Chapter 4

Guided Cognition Effects in Learning Mathematics

Abstract

This chapter reports 11 experiments designed to determine whether Guided Cognition-designed homework facilitates learning of middle school mathematics and, if so, how it helps and what is learned. The experiments were performed in two middle schools and included 8th graders in two experiments and 7th graders in nine experiments. Mathematics topics ranged from fractions to integers to geometry. As in the literature experiments, students were in their normal school environment following their regular curriculum and were unaware that their learning was being observed. Guided Cognition design was found to be effective for learning mathematics. Working story problems that were enriched with cognitive events such as role play, divergent thinking, visualizing and illustrating, and relating to prior experience raised scores on unexpected quizzes by about a letter grade. Another unexpected quiz found that the improvements in problem-solving performance persisted for 6 months. Guided Cognition homework was also found to be efficient in that students who worked eight problems and then performed eight cognitive events performed as well on unexpected quizzes as students who worked 24 problems in the same time interval. Another pair of experiments determined that modest gains could be made from merely reading completed examples of cognitive events, but that these gains were not long lasting. Performing the cognitive events was found to be most effective for long-term performance. Another experiment found that experiencing cognitive events after working some mathematics problems can help consolidate knowledge of how to work such problems.

Keywords

Advance organizers; Consolidators; Effective homework; Efficient homework; Guided Cognition design; Homework; Long-term or long-lasting learning; Middle school mathematics learning
The experiments in Chapter 3 explored the effects of Guided Cognition homework on learning literature. Literature was chosen as our first subject matter because it includes a great deal of interpretive thinking, as well as specific factual material. The literature studies followed a paradigm that directly substituted Guided Cognition homework for Traditional homework to determine the effectiveness and efficiency of various cognitive events.
In this chapter, we extend our research to mathematics. There were three motivations for this. First, we wanted to determine the generality of our findings for this key content area. Second, we thought that mathematics experiments could afford a level of analytic precision that would provide details of how Guided Cognition promotes learning and more precisely, of what is learned. Third, we wanted to better understand the contexts and constraints under which Guided Cognition can be effective.
Our mathematics experiments use a modification of the literature learning paradigm where, for most experiments, we hold constant the amount of calculation practice, and for the Guided Cognition conditions, provide opportunities to engage in cognitive events analogous to those used in the literature experiments.
For all of the mathematics experiments, students did their unsupervised individual learning activities in class so that we could know that they did their own work and so that we could obtain a high completion rate. For this “in-class homework,” students worked without interaction with the teacher or with other students, but they were allowed to use their notes, books, and other materials that would normally have been available while doing homework.
The first two mathematics-content experiments were somewhat exploratory and served to determine whether Guided Cognition homework could be effective in this domain. Later experiments addressed a number of more analytical questions, as well as some practical homework design issues.

Experiment 12: Can Guided Cognition-Designed Homework Facilitate Learning of Basic Geometry Concepts by Middle School Mathematics Students?

In Experiment 12, 8th-grade students were learning to determine the circumference and area of circles. For an initial in-class homework assignment, all students received exactly the same computation-only problems. For a second in-class homework assignment, students in the Traditional homework condition were given story problems to solve. Students in the Guided Cognition homework condition were given the same story problems and in addition, performed a noncomputational cognitive event for each problem. An unexpected quiz was given to evaluate learning from the two kinds of homework.

Method

Participants

Four classes of 8th-grade pre-algebra students participated. All were taught by the same teacher.

Materials

For the first in-class homework, 16 computation-only problems were prepared. Of these, eight required computation of the circumference of a circle, and eight required computation of the area of a circle. Accompanying diagrams showed a circle with a measurement for either radius or diameter. An example problem is: Find the circumference of a circle with a radius of 6   centimeters.
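As an added illustration (these worked lines are not part of the homework materials, and they assume the classroom convention of π = 3.14 used in the later story problems), the example is worked as C = 2πr = 2 × 3.14 × 6 = 37.68 centimeters; the corresponding area computation for a circle of the same radius would be A = πr² = 3.14 × 6² = 113.04 square centimeters.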
For the second in-class homework, 10 traditional story problems were prepared that required solving for either the circumference or the area of a circle. Students in Condition T (Traditional) received these 10 problems. Students in Condition GC (Guided Cognition) received the same 10 problems, but in addition, had one of the 10 cognitive events (described in Chapter 3 as materials for Experiments 1 and 2) paired with each problem such that each cognitive event was used once across the 10 problems. The cognitive events included: relate to prior experience, visualize and illustrate, consider divergent methods, brainstorm and evaluate, role play, estimate and measure time, set goals, evaluate quality of work completed, seek validation, and evaluate quantity of work completed.
An example Condition T problem is:
Martha and David want to throw a party for their 8th-grade graduation. They plan to order foods that are shaped like circles. Suppose one of the foods Martha and David order is a jumbo party pizza. How long is the crust  for this jumbo party pizza? Use π   =   3.14. An accompanying picture showed a pizza with the radius labeled r   =   1.5.
A corresponding example Condition GC problem that included the role play cognitive event is:
Part A
Martha and David want to throw a party for their 8th-grade graduation. They plan to order foods that are shaped like circles. Pretend you are Martha or David and write a conversation between Martha and David in which they discuss what foods they will order and the sizes of each food item they will need. In the conversation, be sure to include the terms circumference and radius.
Part B
Suppose one of the foods Martha and David order is a jumbo party pizza. How long is the crust for this jumbo party pizza? Use π   =   3.14. An accompanying picture showed a pizza with the radius labeled r   =   1.5.
A 24-problem quiz was prepared to evaluate learning from the in-class homework. This quiz included three computation-only circumference problems, three story problems about circumference, three computation-only area problems, three story problems about area, and 12 additional problems on other mathematics topics that had been studied during the year. Circumference and area problems were randomly ordered and alternated with problems about the other topics.

Design and procedure

Students were assigned to one of two conditions, Traditional in-class homework (T) or Guided Cognition in-class homework (GC). To equate mathematics ability level across the two conditions, all students were rank-ordered by the averages of their test scores from the first half of the school year, and then one of each successive pair of students was randomly assigned to Condition T and the other to Condition GC. The teaching and in-class homework followed the normal classroom curriculum. Students were not aware that they were participating in a study or that they had been assigned to conditions.
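For readers who want a concrete statement of this pairing procedure, a minimal sketch in Python follows. It is illustrative only; the function name and data structure are assumptions for this sketch and are not taken from the study's materials.

import random

def assign_matched_pairs(students):
    # students: list of (student_id, prior_test_average) tuples.
    # Rank-order students by prior test average, highest first.
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    condition_t, condition_gc = [], []
    # Walk the ranking two students at a time; within each successive pair,
    # randomly send one member to Condition T and the other to Condition GC.
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)
        condition_t.append(pair[0])
        condition_gc.append(pair[1])
    # Note: an odd leftover student is not assigned by this sketch.
    return condition_t, condition_gc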
The teacher presented classroom instruction on how to determine the circumference and area of a circle. Definitions, concepts, and relevant equations were presented and discussed. Following this instruction, students practiced solving circle circumference and area problems as in-class homework. For this activity, each student was given a set of 16 problems that included eight circumference and eight area problems.
On the second day, the teacher presented and worked three story problems that required calculating the circumference or area of a circle. After the presentation, the Condition T students were given 10 story problems as in-class homework; these problems required calculating either the circumference or the area of a circle. The Condition GC students were given the same 10 story problems, but each problem included one of the 10 cognitive events. All students worked on these problems for the remainder of the class period and then continued working the problems on the following day. Students who completed the in-class homework before the end of the class period filled the time by working problems from another part of the curriculum that did not include circumference and area.
Two days later, all students were given a previously unannounced 24-problem quiz to evaluate their conceptual and computational understanding of circumference and area.
The experiment timeline is summarized below:
Monday Taught definitions, concepts, and equations for computing circumferences and areas of circles.
In-class homework: 16 computation-only problems.
Tuesday Taught how to solve story problems about circumferences or areas of circles.
In-class homework: Ten Traditional or Guided Cognition story problems.
Wednesday Students continued their in-class homework from the previous day.
Thursday No school and no new material or homework.
Friday Previously unannounced quiz.


Results

Data were analyzed for 34 students who participated in all phases of the experiment. Comparison of the means of previous test scores showed that the two groups were very similar in pre-experimental mathematics performance. The mean test scores were 77.8 and 79.6 percent correct for Conditions T (n   =   17) and GC (n   =   17), respectively, F (1,32)   =   0.24, p   =   .630, ns.
The average percent correct on the first in-class homework was 72.4 and 78.3 for Conditions T and GC, respectively, F (1,32)   =   0.40, p   =   .531, ns, showing that the two conditions were closely matched for solving area and circumference computation-only problems. The average percent correct on the second in-class homework was 47.1 and 54.1 for Conditions T and GC, respectively, F (1,32)   =   0.72, p   =   .402, ns, demonstrating that the two conditions were also closely matched in their abilities to solve area and circumference story problems. In contrast, on the unexpected 2-day delayed quiz, Condition GC students outperformed Condition T students by 14.7 percentage points. Percent-correct means were 75.5 and 60.8 for Conditions GC and T, respectively, F(1,32)   =   4.75, p   =   .037.

Discussion

Students were closely matched by their course grades and on their in-class homework computational accuracy, and all students had exactly the same amount of computation practice. Nevertheless, the Condition GC students who performed the noncomputational cognitive events were able to excel by nearly 15 percentage points on an unexpected, 2-day delayed quiz. In practical terms, this is more than a letter grade better than the students who did not experience Guided Cognition homework, assuming letter grades are 10 percentage points apart. This superiority cannot be attributed to the traditional practice of performing computations while solving problems because the Condition GC students had the same problems and computations as the Condition T students. The GC students' added performance in solving problems on the quiz is attributed to better understanding of the concepts underlying the problems, to better retention of the computational steps necessary to solve the problems, or to both, as facilitated by the students' thinking about the characteristics of the problems while performing the cognitive events.

Experiment 13: Can Guided Cognition-Designed Homework Facilitate Learning of Positive and Negative Integer Addition by Middle School Mathematics Students?

Experiment 13 was designed to determine the generality of the Guided Cognition effect for mathematics by having a different group of students perform Traditional and Guided Cognition homework for another mathematics topic. The 8th-grade students in this experiment were learning to add positive and negative integers. For an initial in-class homework assignment, all students received the same computation-only problems. For a second in-class homework assignment, students in the Traditional homework condition were given story problems to solve. Students in the Guided Cognition homework condition were given the same story problems and in addition, performed a noncomputational cognitive event for each problem. An unexpected quiz was given to evaluate learning from the two kinds of homework.

Method

Participants

Two classes of 8th-grade pre-algebra students participated. Both classes were taught by the same teacher.

Materials

For the first in-class homework, 20 integer problems were prepared. Of these, 10 were same-sign addition problems, and 10 were opposite-sign addition problems that required computation using either the number line or the rules for adding integers. Each problem presented only the equation to be solved, and students were asked to perform the computations to solve these equations. An example problem is:   7   +   11   =   ?.
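For concreteness (the opposite-sign example below is an added illustration and not a problem from the homework set), the example above is a same-sign problem, 7 + 11 = 18, whereas an opposite-sign problem of the same form would be (−7) + 11 = ?, solved either by counting 11 steps to the right of −7 on the number line or by applying the rules for adding integers, giving (−7) + 11 = 4.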
For the second in-class homework, 10 integer story problems were prepared. Five problems required adding integers with the same sign, and five problems required adding integers with opposite signs. To determine the appropriate equations, students were required to understand the concept of negative numbers, their relation to zero, and their relation to positive numbers on the number line. Condition T students received these 10 problems. Condition GC students received the same 10 problems, but in addition, one of the 10 cognitive events was paired with each problem such that each cognitive event was used once across the 10 problems. The cognitive events included: relate to prior experience, visualize and illustrate, consider divergent methods, brainstorm and evaluate, role play, estimate and measure time, set goals, evaluate quality of work completed, seek validation, and evaluate quantity of work completed.
An example of a Condition T problem is:
A bungee jumper descends 30   feet and then bounces up 10   feet. Write an equation using integers to represent this story. Solve the equation to find the bungee jumper's change in height.
A corresponding example of a Condition GC problem that includes the visualize and illustrate cognitive event is:
Part A
A bungee jumper descends 30   feet and then bounces up 10   feet. Draw a picture of this situation. Be sure to label the measurements.
Part B
Write an equation using integers to represent this story. Solve the equation to find the bungee jumper's change in height.
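For reference, a worked solution (not part of the homework materials) is: taking downward as negative, the story is represented by (−30) + 10 = −20, so the bungee jumper's change in height is −20 feet, a net drop of 20 feet.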
A 24-problem quiz was prepared to evaluate learning from the in-class homework. The quiz included six computation-only problems with same-sign integers; six computation-only problems with opposite-sign integers; six story problems that required interpretation and set up, and computations with same-sign integers; and six story problems that required interpretation and set up, and computations with opposite-sign integers.

Design and procedure

Students were assigned to one of two conditions, Traditional in-class homework (T) or Guided Cognition in-class homework (GC). To equate mathematics ability level across the two conditions, students were rank-ordered by the averages of their previous test grades. One of each successive pair of rank-ordered students was randomly assigned to the T group and the other to the GC group. The teaching and in-class homework followed the normal classroom curriculum. Students were not aware that they were participating in a study or that they had been assigned to conditions.
The teacher presented and discussed concepts related to integers and provided examples of using a number line to add integers of the same sign and to add integers of opposite signs. The next day, the rules for determining answers to integer addition problems were presented and discussed. Following this instruction, students practiced solving computation-only problems as in-class homework. All students were given the same 20 problems, 10 same-sign addition problems and 10 opposite-sign addition problems.
The third day, the teacher presented and worked story problems that required calculating the answers to integer problems. Students were asked to identify key words that indicated whether a number should be negative or positive, to set up the problem, and then to perform the computation.
After the presentation, students were given a second in-class homework assignment. Condition T students were asked to solve 10 story problems that required adding integers with the same sign or with opposite signs. Condition GC students were given the same 10 story problems, but each problem also included one of the 10 cognitive events. Students in both conditions worked on their problems for the remainder of the class period and then continued working the problems on the following day. Students who completed the in-class homework before the end of the class period filled the time by working problems from another part of the curriculum that did not include problems about adding integers.
After a planned 3-day delay, students were given a previously unannounced 24-problem quiz to evaluate their conceptual and computational understanding of adding integers.
The experiment timeline is summarized below:
Tuesday Instruction on adding integers and using the number line.
Wednesday Instruction about integer addition rules.
In-class homework: Computation-only problems.
Thursday Instruction on solving integer story problems.
In-class homework: Either Traditional or Guided Cognition integer story problems.
Friday Continued in-class homework from Thursday.
Weekend No new material or homework.
Monday Previously unannounced quiz.


Results

Data were analyzed for 20 students who participated in all phases of the experiment. Comparison of the means of previous test scores showed that the two groups were well matched in mathematics ability. The mean test scores were 61.9 and 65.2 percent correct for Condition T (n   =   11) and Condition GC students (n   =   9), respectively, F (1, 18)   =   0.67, p   =   .423, ns.
The average percent correct on the first in-class homework was 88.6 and 83.9 for the T and GC Conditions, respectively, F (1,18)   =   0.40, p   =   .536, ns, indicating that the two groups were well matched in their ability to solve computation-only integer problems. The average percent correct on the second in-class homework was 71.8 and 80.0 for the T and GC Conditions, respectively, F (1,18)   =   0.59, p   =   .454, ns, showing no significant difference in solving integer story problems. In contrast, on the unexpected 3-day delayed quiz, which reflected learning from the in-class homework, Condition GC students outperformed Condition T students by 15.6 percentage points. Percent-correct means were 86.1 and 70.5 for Conditions GC and T, respectively, F (1,18)   =   7.63, p   =   .013.

Discussion

As in Experiment 12, students in Conditions T and GC of Experiment 13 performed similarly on the in-class homework computation-only problems and story problems, but Condition GC students, who engaged in Guided Cognition thinking, excelled on an unexpected delayed quiz, again performing more than a letter grade better than the students who did not have the Guided Cognition homework. Taken together, the results of Experiments 12 and 13 provide convincing evidence that adding Guided Cognition thinking to mathematics homework, without additional calculation practice, improves students' performance on 2- or 3-day delayed unexpected quizzes.
It is illuminating to compare the results of these two experiments as shown in Figure 4.1. Whether studying geometry (Experiment 12) or integer addition (Experiment 13), students whose homework included the Guided Cognition cognitive events performed reliably and similarly better (about 15 percentage points on an unexpected 2- or 3-day delayed quiz) than did students who had Traditional homework problems. The relative gains were also very similar across the two experiments. If we divide Condition GC quiz performance by Condition T quiz performance, we find that Condition GC students performed 24% better than Condition T students in Experiment 12, and 22% better than Condition T students in Experiment 13.
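For clarity, these relative gains follow directly from the quiz means reported above: 75.5 ÷ 60.8 ≈ 1.24 for Experiment 12 and 86.1 ÷ 70.5 ≈ 1.22 for Experiment 13.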
In the next nine mathematics-content experiments, we explore further what is learned from Guided Cognition homework, the long-term benefits of Guided Cognition homework, the effectiveness of Guided Cognition homework for students of various ability levels, and some practical design issues.
Figure 4.1 Performance on unexpected 2-day (Experiment 12) or 3-day (Experiment 13) delayed quizzes for two mathematics topics after working either Traditional or Guided Cognition-designed problems.

Experiment 14: What Specific Skills and Strategies in Mathematics Can Be Learned More Effectively as a Result of Guided Cognition Homework?

Experiments 12 and 13 demonstrated that Guided Cognition study can facilitate subsequent problem-solving performance. What are the students learning from the Guided Cognition homework? Is this style of homework improving their ability to interpret and set up problems, to execute the required calculations, or is it improving their abilities in both of these components of problem solving? Experiment 14 was designed to determine what is learned from Guided Cognition mathematics homework—interpretation, execution, or both.

Method

Participants

Six classes of low- to average-ability 7th-grade middle school mathematics students participated. Two teachers taught three classes each.

Materials

In-class homework materials were constructed for two successive topics—multiplying fractions and mixed numbers, and dividing fractions and mixed numbers. Four story problems were prepared for each topic. Condition T problems were traditional story problems that required students to interpret and set up the problems, then to execute the calculations. Condition GC problems were identical, but also included one of four content-focused Guided Cognition cognitive events: relate to prior experience, visualize and illustrate, consider divergent methods, and role play. For each of the two topics, each of these four cognitive events was paired with one of the four Condition GC problems.
An example of a Condition T problem is:
  1. Ryan has 22 3/4 pounds of dog food.
  2. If his dog eats 1 3/4 pounds of dog food each day, how many days will this supply of dog food last?
  3. Solve this problem.
  4. Remember to show all of your work.
A corresponding example of a Condition GC problem that includes relate to prior experience is:
Part A
  1. Ryan has 22 3/4 pounds of dog food.
  2. If his dog eats 1 3/4 pounds of dog food each day, how many days will this supply of dog food last?
  3. Solve this problem.
  4. Remember to show all of your work.
Part B
  1. Tell about a situation like Ryan's that you have experienced, read about, or can imagine, where knowing how to divide mixed numbers could help you determine how long some supplies will last.
  2. Circle the mixed numbers in your example (but do not work the problem).
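For reference, a worked solution to the story problem above (added for illustration; it was not provided to the students) is: 22 3/4 ÷ 1 3/4 = 91/4 ÷ 7/4 = 91/4 × 4/7 = 91/7 = 13, so the supply of dog food will last 13 days.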
Materials for a previously unannounced quiz, described by the teachers as a review activity, were constructed to determine whether the Guided Cognition experience helped students with problem interpretation and set up, or with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations.
An example numerical-only problem from the review activity is:
5 3/5 ÷ 2 1/3
An example story problem from the review activity is:
  1. Susan is cutting wood to make picture frames.
  2. How many pieces of wood that are 10 3/4 inches long can she cut from a board that is 75 1/4 inches long?

Design and procedure

Traditional and Guided Cognition unsupervised individual learning (in-class homework) conditions were assigned within each class to control for time-of-day and teacher variables. To balance student ability across conditions, students in each class were rank-ordered by their course grade averages for the first 3   months of the school year. In each successive pair of rank-ordered students, one was assigned to Condition T, and one was assigned to Condition GC. Order of assignment to conditions alternated in successive pairs.
The mathematics topics were taught as usual, but evening homework was replaced by in-class homework to assure a high participation rate and to assure that students did their own work. Students who completed the in-class homework before the class period ended were given a mathematics worksheet on another topic for the remaining time. Instructions to the students for the in-class homework were a paraphrase of the following: “Today's homework will be done in class. You may use your book and class notes if needed. Please do your own work, without talking to your classmates, and try to complete the work without help. I have made more than one set of problems, so yours may be a little different from your neighbor's. When you have completed the problems, please bring your papers to me. Then I will give you another worksheet to use for the remainder of the class period.”
Following the teaching and in-class homework for the two topics, students were given a previously unannounced review activity (quiz) consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. Each version of the review activity was given to half the students in each in-class homework condition (T or GC) to counterbalance the time students spent on the two types of problems.
For the review activity, students were asked to put away textbooks, class notes, and calculators. The review activity was described as an opportunity for students to see what they knew about the last two book-sections on multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for part of a future chapter test. The review activity was graded for use as data but was not used as a school grade.
The experiment timeline was as follows:
Thursday–Friday Taught multiplying fractions and mixed numbers.
Saturday–Monday Weekend with no new material or homework, followed by a field trip.
Tuesday In-class homework on multiplying fractions and mixed numbers.
Wednesday–Thursday Taught dividing fractions and mixed numbers.
Friday In-class homework on dividing fractions and mixed numbers.
Weekend No new material or homework.
Monday Review activity.

Results

For each set of problems, a grading rubric was designed and used by both teachers. The numerical problems were graded on a 2-point scale that included partial or full credit for performing the calculations. The story problems were graded on a 5-point scale that included points for correct interpretation and set up and points for correct execution of the calculations. All work was scored for partial credit according to the rubrics. Mean pre-experiment grade averages of 96 students who participated in all parts of the experiment were found to be nearly identical across conditions (Condition T, n   =   48, GPA   =   86.4; Condition GC, n   =   48, GPA   =   87.3, t (94)   =   0.58, p   =   .567, ns).
Analysis of review activity (quiz) performance, without regard to problem type, confirmed an overall Guided Cognition effect, GC > T, t (94) = 2.67, p = .009. This analysis gives more weight to the story problems because according to the grading rubric, each story problem was worth five points, and each numerical problem was worth two points. This scoring method is analogous to how an actual quiz might be graded, with more points awarded for more complex problems: From that viewpoint, there is nearly a letter grade improvement due to Guided Cognition, assuming that letter grades are 10 percentage points apart. Means are shown in the top row of Table 4.1.
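To see the weighting concretely: the review activity contained eight story problems worth five points each and eight numerical problems worth two points each, so story problems account for 40 of the 56 available points (about 71 percent) and numerical problems for the remaining 16 points (about 29 percent).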
To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of in-class homework condition (T vs. GC) by review activity problem type (numerical vs. story) revealed a main effect of homework condition, GC > T, F (1,94) = 8.02, p = .006; a main effect of problem type, numerical > story, F (1,94) = 234.71, p < .001; and no interaction, F (1,94) = 0.15, p = .697, ns. The means for performance on numerical problems and story problems, when the two types of problems are equally weighted, are shown in Table 4.1, below.

Table 4.1

Mean Percent Correct for Review Activity Problems Based on Total Points for All Problems, and on Equally Weighted Numerical and Story Problems, after Traditional or Guided Cognition In-Class Homework.
Problem Type | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC > T
All problems | 55.0 | 63.5 | 8.5
Numerical problems | 81.1 | 88.4 | 7.3
Story problems | 44.5 | 53.6 | 9.1


Discussion

There was no interaction of homework condition (Traditional or Guided Cognition) with review activity problem type (story or numerical), so we cannot say that Guided Cognition homework helps more with one type of problem than the other. The Guided Cognition advantage was found for story problems that required interpretation and set up, and calculations; the advantage was also found for numerical problems that required only calculations. The former result was expected because it was hypothesized that thinking about how to solve problems, even in the absence of calculation practice, should help students interpret and set up similar problems at a later time. It is especially interesting that the Guided Cognition tasks resulted in better execution of calculation-only problems because the cognitive events did not include additional calculation practice—students in both conditions performed exactly the same calculations. This finding implies that even without specific calculation practice, Guided Cognition resulted in more elaborate or more stable representations of the calculation procedures.

Experiment 15: How Does Guided Cognition Homework Promote Learning? Is the Gain Temporary or Is It Long Lasting?

A frustration for teachers and students alike is high performance on an initial exam (e.g., after “cramming”) but low performance on a later exam. Experiment 14 demonstrated that Guided Cognition in-class homework results in better problem solving of both numerical and story problems after a delay of 3–6   days. Is this a relatively short-term gain, possibly mediated by better memory for formulas and procedural mechanics, or does the gain continue over a substantial time? If Guided Cognition thinking can improve stability of skills, either by improving initial understanding or by increasing long-term retrieval, or by a combination of these, then Guided Cognition thinking would be a very valuable addition to the design of mathematics instruction.
Experiment 15 was designed to determine whether the Guided Cognition effect lasts over a substantial time interval. To determine the long-term consequences of Guided Cognition study, we retested the Experiment 14 students 6 months later. This delayed test is a very conservative measure of long-term Guided Cognition effects because during the 6-month interval, the students had reviewed and been tested on fraction skills, and the lasting effects of Guided Cognition could have been modulated by those study-test experiences. Any residual benefit of the Guided Cognition homework would be in addition to this intervening learning.

Method

Participants

The participants were the same six classes of low- to average-ability 7th-grade middle school mathematics students who had participated in Experiment 14. Two teachers taught three classes each.

Materials

As in Experiment 14, review activity materials were constructed to determine whether the Guided Cognition experience helped students with problem interpretation and set up, with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations.
The Experiment 15 review activity was constructed to be formally similar to the Experiment 14 review activity. For example, a numerical problem from the Experiment 14 review activity is:
3 1/2 × 2/9
The corresponding Experiment 15 review activity numerical problem that was authored by changing the numerical values of the Experiment 14 problem while retaining the required calculation steps is:
4 1/2 × 2/11
An example of a story problem from the Experiment 14 review activity is:
  1. Dan spent 12 1/2 hours last week practicing guitar.
  2. 1/4 of the time was spent practicing chords.
  3. How much time did Dan spend practicing chords?
The Experiment 15 review activity story problem that was created by changing the names and numerical values of the Experiment 14 story problem, while retaining the format and required calculations, is:
  1. John spent 11 1/2 hours last week practicing guitar.
  2. 1/5 of the time was spent practicing chords.
  3. How much time did John spend practicing chords?
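For reference, and using the values shown above, the two versions require identical calculation steps and differ only in their numbers: 12 1/2 × 1/4 = 25/2 × 1/4 = 25/8 = 3 1/8 hours for Dan, and 11 1/2 × 1/5 = 23/2 × 1/5 = 23/10 = 2 3/10 hours for John. These worked lines are added for illustration and were not part of the review activities.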

Design and procedure

On December 15, students completed the initial Experiment 14 review activity on multiplying and dividing fractions and mixed numbers. Six months later, on June 12, students were given a second, previously unannounced review activity consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. Half the students who had been in the Traditional in-class homework condition in Experiment 14 were given one version, and half were given the other version. Similarly, half the students who had been in the Guided Cognition in-class homework condition in Experiment 14 were given one version, and half were given the other.
For the review activity, students were asked to put away their textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they had learned about multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for mathematics they would study in the 8th grade. The review activity was graded for use as data but was not used as a school grade.

Results

For each set of problems, a grading rubric, using the same point totals and rules as the Experiment 14 rubric, was designed and used by both teachers. All work was scored for partial credit according to the rubric. Data of students who did all parts of Experiment 14 and who worked on both parts (numerical problems and story problems) of the Experiment 15 review activity were included in the analyses.
Mean pre-experiment grade averages of the 71 students who participated in all parts of Experiments 14 and 15 were found to be nearly identical across conditions (Condition T, n   =   34, GPA   =   86.1; Condition GC, n   =   37, GPA   =   87.9, t (69)   =   1.08, p   =   .286, ns).
Analysis of in-class homework condition (T vs. GC) by review activity delay (3–6 days vs. 6 months), without regard to problem type, confirmed a main effect of condition, GC > T, F (1,69) = 7.40, p = .008, a main effect of review activity delay, 3–6 days > 6 months, F (1,69) = 25.60, p < .001, and no interaction, F (1,69) = 0.03, p = .867, ns. This analysis gives more weight to the story problems because according to the grading rubric, each story problem was worth five points, and each numerical problem was worth two points. This unequally weighted analysis is analogous to the way tests are actually graded, with more points from more complex problems, and in that sense, it is a valid analysis for estimating the grade improvement that can be attributed to Guided Cognition homework. Whether after 3–6 days or 6 months, students who had the Guided Cognition homework performed about one letter grade better, assuming that letter grades are 10 percentage points apart. Means are shown in Table 4.2.

Table 4.2

Mean Overall Percent Correct on the Review Activity after Traditional or Guided Cognition In-Class Homework at 3- to 6-Day and 6-Month Review Activity Delays.
Review Activity Delay | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC > T
All problems after 3–6 days | 56.1 | 66.3 | 10.2
All problems after 6 months | 43.5 | 52.8 | 9.3


The primary purpose of Experiments 14 and 15 was to determine the effects of Guided Cognition homework on each type of problem. To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of in-class homework condition (T vs. GC) by review activity problem type (numerical vs. story) by review activity delay (3–6 days vs. 6 months) revealed a main effect of condition, GC > T, F (1,69) = 7.81, p = .007; a main effect of delay, 3–6 days > 6 months, F (1,69) = 206.78, p < .001; and a main effect of problem type, numerical > story, F (1,69) = 33.14, p < .001. There was no interaction of delay and homework condition, F (1,69) = 0.23, p = .634, ns; of problem type and homework condition, F (1,69) = 0.23, p = .635, ns; or of delay by problem type by homework condition, F (1,69) = 1.25, p = .268, ns. There was an interaction of delay and problem type, F (1,69) = 5.59, p = .021, indicating that over the 6-month interval, the drop in performance was greater for numerical problems than for story problems. This interaction may be partly due to the fact that performance on the numerical problems started higher, so the possible range of decline was much greater for the numerical problems than for the story problems. The relative decrease in performance for each type of problem was, however, nearly identical. For numerical problems, performance on the 6-month delayed review activity was 77.9% of the students' performance on the 3- to 6-day delayed review activity. For story problems, performance on the 6-month delayed review activity was 78.7% of the students' performance on the 3- to 6-day delayed review activity. The average percent correct for each type of problem at near-term and very long-term review activity delays is shown in Figure 4.2.
Figure 4.2 Mean percent correct for two types of problems at review activity delays of 3–6   days and 6   months after Traditional or Guided Cognition in-class homework, where students in the Guided Cognition condition performed cognitive events.

Discussion

After Guided Cognition in-class homework, performance was better for both problem types (numerical and story) and for both review activities (after 3–6   days and after 6   months). The lack of interactions with homework conditions implies that the relative value of Guided Cognition study was sustained for over half a year. It is also interesting to note that these students studied for and took a test on this content a few days after the first review activity, so the Guided Cognition effect was not washed out by such extra study and test activities. It is as if the small difference in two in-class homework assignments made a permanent difference in the students' abilities to work the fraction problems.
As in Experiment 14, the lack of interactions of homework condition (T or GC) with review activity problem type (story or numerical) means we cannot say that Guided Cognition helps more with one type of problem than the other. After a 6-month interval, the Guided Cognition advantage was found for story problems that required interpretation and set up, and calculations; this advantage was also found for numerical problems that required only calculations. The former result was expected because it was hypothesized that thinking about how to solve problems, even in the absence of calculation practice, should help students interpret and set up similar problems at a later time. That the Guided Cognition tasks also resulted in better execution of calculation-only problems was less expected and is especially interesting because the cognitive events did not include additional calculation practice: students in both conditions performed exactly the same calculations. This finding implies that even without specific calculation practice, Guided Cognition resulted in more elaborate or more stable representations of the calculations. That the effect lost none of its strength after 6 months suggests that Guided Cognition can provide a highly desirable very long-term performance advantage, which makes Guided Cognition a valuable tool for designing effective mathematics homework.

Experiment 16: Is Guided Cognition Homework Efficient for Learning Mathematics?

In Experiment 14, we evaluated Guided Cognition homework's effectiveness for learning mathematics and reported that Guided Cognition homework improved students' abilities to interpret and work story problems and also to execute calculations for numerical-only problems. The Guided Cognition and Traditional homework groups had the same calculations to perform, but the Guided Cognition students completed additional cognitive events. These activities did not provide additional calculation practice, so it is likely that they reinforced later calculation abilities by increasing comprehension of concepts; by increasing retrievability of concepts, formulas, and definitions; or by increasing both understanding and retrievability.
Our finding that long-term calculation ability can be increased by non-calculation thinking activities (cognitive events), even without performance of mechanics, suggests that homework can be designed to be as efficient as, or perhaps even more efficient than, homework consisting entirely of problem-solving practice if some of the homework time is devoted to verbal and visual cognitive events that focus on concepts, formulas, and definitions in place of some of the problem-solving practice.
Following this logic, Experiment 16 was designed to assess Guided Cognition efficiency. Suppose a student has a fixed amount of time (e.g., 1   hour) to practice and learn a particular mathematics topic skill. How should that time be apportioned between calculation practice and Guided Cognition cognitive events?
As a first step in answering this question, we had 7th-grade students, during equal time intervals, either work 12 story problems (Condition T) or work four story problems and perform four non-calculation cognitive events (Condition GC) for each of two topics (multiplying fractions and mixed numbers, and dividing fractions and mixed numbers). Subsequently, we assessed students' mathematical skills with a previously unannounced review activity (quiz) consisting of 12 story problems.

Method

Participants

Six classes of low- to average-ability 7th-grade middle school mathematics students participated. Two teachers taught three classes each.

Materials

In-class homework materials were constructed for two successive topics—multiplying fractions and mixed numbers, and dividing fractions and mixed numbers. Twelve story problems were prepared for each topic, and four cognitive events (with no calculation component) were prepared for each topic. The four cognitive events, studied extensively in our previous research on learning literature, were: relate to prior experience, visualize and illustrate, consider divergent methods, and role play.
An example story problem is:
  1. A pumpkin bread recipe calls for 1/4 teaspoon of salt for one standard-sized loaf.
  2. Emily has five standard-sized loaf pans and one smaller, half-sized loaf pan, so she wants to make 5 1/2 loaves.
  3. What is the total amount of salt she will need for 5 1/2 loaves?
  4. Solve the problem.
  5. Remember to show all of your work.
An example of a cognitive event (visualize and illustrate), to be completed without solving the problem, is:
  1. A pumpkin bread recipe calls for 1/4 teaspoon of salt for one standard-sized loaf.
  2. Emily has five standard-sized loaf pans and one smaller, half-sized loaf pan, so she wants to make 5 1/2 loaves.
  3. Emily needs to determine the total amount of salt she will need for 5 1/2 loaves.
  4. Do not work Emily's problem, but think about how to work it by doing the following:
    1. Draw a simple diagram showing the pans.
    2. On each standard-sized pan, write in words (for example, “one-fourth teaspoon”) the amount of salt Emily will need.
    3. For the half-sized pan, explain in words how to determine the amount of salt she will need but do not work a problem.
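For reference, a worked solution to Emily's problem (added for illustration; it was not provided to the students) is: 1/4 × 5 1/2 = 1/4 × 11/2 = 11/8 = 1 3/8 teaspoons of salt.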
Review activity (quiz) materials were constructed to determine whether the Guided Cognition cognitive events were as efficient for learning to solve problems as additional problem-solving practice. The review activity consisted of 12 story problems that required interpretation and set up, and calculations. Six of these problems required multiplication of fractions and mixed numbers, and six problems required division of fractions and mixed numbers.

Design and procedure

For each topic, the in-class homework was given to students in three parts, and students were allowed 12   minutes to work on each part. All students received the same four story problems for Part A. Condition T students received four more story problems for Part B and another four story problems for Part C for a total of 12 problems to work during the 36-minute session. Condition GC students received two of the four cognitive events for Part B and the other two cognitive events for Part C, so during the 36-minute session, the Condition GC students worked four story problems and also thought about concepts, definitions, and procedures within the contexts of four cognitive events. Pacing of the in-class homework is summarized in Table 4.3.
Traditional and Guided Cognition unsupervised individual learning (or in-class homework) conditions were assigned within each class to control for time-of-day and teacher variables. To balance student ability across conditions, students in each class were rank-ordered by their course grade averages for the first quarter of the school year. In each successive pair of rank-ordered students, one was assigned to Condition T, and one was assigned to Condition GC. The order of assignment to conditions alternated in successive pairs.

Table 4.3

Pacing of In-Class Homework for Each Topic.
Condition | Part A (12 Minutes) | Part B (12 Minutes) | Part C (12 Minutes)
T | 4 problems | 4 problems | 4 problems
GC | 4 problems | 2 cognitive events | 2 cognitive events


The mathematics topics were taught as usual, and evening homework was assigned as usual. Then students were away from school for 11 days for the winter holiday break. On the 12th day, Condition T students were given Traditional in-class homework, and Condition GC students were given Guided Cognition in-class homework on multiplying fractions and mixed numbers. The next day Condition T students were given Traditional in-class homework, and Condition GC students were given Guided Cognition in-class homework on dividing fractions and mixed numbers.
For each in-class homework, instructions to the students were a paraphrase of the following: “Today's homework will be done in class. You may use your book and class notes. Please do your own work, without talking to your classmates, and try to complete the work without help. I will pass out the work in three parts. You will have 12   minutes to work on each part. When 12   minutes are up, I will collect your papers and give you the next part. I have made more than one set of problems, so yours may be a little different from your neighbor's.”
Following these instructions, the teacher distributed Parts A, B, and C in succession. Twelve minutes were allowed for working on each of the three parts, and each part was collected before the next was distributed.
One or two days after the second in-class homework (depending on a variable school schedule), students were given a previously unannounced review activity (quiz) consisting of 12 story problems. For the review activity, students were asked to put away their textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they knew about the last two book-sections on multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity would be good practice for part of a future chapter test. The review activity was graded for use as data but was not used as a school grade.
The experiment timeline was as follows:
Thursday Taught multiplying fractions and mixed numbers. Assigned regular evening homework.
Friday Taught multiplying fractions and mixed numbers.
Assigned regular evening homework.
Weekend No new material or homework.
Monday Taught dividing fractions and mixed numbers.
Assigned regular evening homework.
Tuesday (half period) Taught dividing fractions and mixed numbers.
Assigned regular evening homework.
Wednesday (half period) Taught dividing fractions and mixed numbers.
No homework was assigned.
11-day break
Monday In-class homework on multiplying fractions and mixed numbers.
Tuesday In-class homework on dividing fractions and mixed numbers.
Wednesday or Thursday Review activity.


Results

For each set of problems, a grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubrics. Mean pre-experiment grade averages of 94 students who participated in all parts of the experiment were found to be nearly identical across conditions (Condition T, n   =   48, GPA   =   86.2; Condition GC, n   =   46, GPA   =   87.0, t (92)   =   0.47, p   =   .642, ns).
All students received the same four story problems in Part A for each set of homework, so performance on Part A serves as another comparison of a priori problem-solving ability across conditions. Analysis of Part A homework for multiplying fractions and mixed numbers showed the groups were well matched, with percent correct for T   =   46.2 and GC   =   51.0, t (92)   =   0.80, p   =   .427, ns. Similarly, analysis of Part A homework for dividing fractions and mixed numbers showed the groups were well matched, with percent correct for T   =   43.4 and GC   =   41.8, t (92) = 0.35, p   =   .729, ns. Because the review activity included half multiplication and half division problems, the Part A homework scores were combined for the two topics and not surprisingly, showed that overall, the students in the two conditions were well matched for Part A problem solving, T   =   44.8 and GC   =   46.4, t (92)   =     0.37, p   =   .710, ns.

Table 4.4

Learning During In-Class Homework.
Condition T Problems | Part A (12 Minutes) | Part B (12 Minutes) | Part C (12 Minutes)
Multiplication | 46.2 | 57.9 | 63.0
Division | 43.4 | 61.0 | 70.8


The three parts of Condition T homework were analyzed to determine whether or not students were improving with problem-solving practice. The mean performance for each 12-minute time period is shown in Table 4.4.
Multiplication homework performance increased significantly over the three time periods, as confirmed by a significant linear trend, F (1,47) = 14.06, p < .001. Division homework performance also increased over the three time periods, and this was confirmed by a significant linear trend, F (1,47) = 49.73, p < .001.
Performance on the review activity showed that performing the cognitive events was as helpful and efficient for learning as was more practice solving problems, with percent correct for T   =   56.6 and GC   =   52.1, t (92)   =   1.15, p   =   .252, ns.

Discussion

A previously unannounced review activity (quiz) given 1–3   days after in-class homework found that Condition GC students, who had worked only eight in-class homework problems, performed as well as Condition T students, who had worked 24 in-class homework problems. The significant linear trends for the Condition T homework clearly show that these students were learning from the additional problem-solving practice. By inference, the Condition GC students must have learned from the cognitive events in order to perform as well as the Condition T students on the review activity. In other words, the Condition GC students, who worked a total of eight homework problems and performed eight cognitive event “thinking” exercises, learned as much from the in-class homework as the Condition T students who worked a total of 24 problems. In this experiment, therefore, performing cognitive events was as efficient as spending the same time working more problems.
From these data, we cannot predict the optimal ratio of problem-solving practice to conceptual practice for effective long-term problem-solving performance. The data do show, however, that Guided Cognition tasks can be as efficient as additional computational practice for learning mathematics: Holding homework time strictly constant, students may learn as well from working fewer problems if they spend the additional time performing non-calculation cognitive events.

Experiment 17: What Are the Long-Term Effects on Problem-Solving Ability of Trading Off Some Calculation Practice for Conceptual Thinking?

Experiment 16 had been designed to assess Guided Cognition's efficiency by comparing initial learning from extensive problem-solving practice with initial learning from less problem-solving practice, combined with Guided Cognition tasks that focus students' thinking on conceptual understanding of the problems. During equal time intervals, students either worked 12 story problems (Condition T) or worked four story problems and performed four non-calculation cognitive events (Condition GC) for each of two topics (multiplying fractions and mixed numbers, and dividing fractions and mixed numbers). An unexpected quiz after 1–3 days found that Condition GC students, who had worked only eight homework problems, performed as well as Condition T students, who had worked 24 homework problems. Significant linear trends for the Condition T homework clearly showed that these students had learned from the additional problem-solving practice. By inference, the Condition GC students must have learned from the cognitive events in order to perform as well as the Condition T students on the review activity. In other words, the Condition GC students, who worked a total of eight homework problems and performed eight cognitive event “thinking” exercises, learned as much from the in-class homework as the Condition T students who worked a total of 24 problems. In this case, performing cognitive events proved to be as efficient as spending the same time working more problems.
Because initial learning performance does not always predict later performance, it is important to determine which conditions of learning are best for the long term. (For example, high performance on an exam after “cramming” may be followed by relatively low performance later.) Experiment 17 was designed to evaluate the long-term retention of problem-solving ability several months after either (1) Condition T that consisted of extensive problem-solving practice or (2) Condition GC that consisted of less problem-solving practice, combined with Guided Cognition tasks that focused students' thinking on conceptual understanding of the problems. A priori, it is not possible to predict which condition will be better for long-term performance. It is possible that more problem-solving practice produces better long-term memory of how to solve the problems, but it is also possible that the cognitive events lead to improved understanding of the concepts underlying the problems and/or better long-term memory of the procedures.

Method

Participants

The participants were the same six classes of low- to average-ability 7th-grade middle school mathematics students who had participated in Experiment 16. Two teachers taught three classes each.

Materials

As in Experiment 16, review activity (quiz) materials were constructed to determine whether the Guided Cognition cognitive events were as efficient for learning to solve problems as additional problem-solving practice. The review activity consisted of 12 story problems that required interpretation and set up, and calculations. Six of these problems required multiplication of fractions and mixed numbers, and six problems required division of fractions and mixed numbers.
The Experiment 17 review activity was constructed to be formally similar to the Experiment 16 review activity. For example, a story problem from the Experiment 16 review activity is:
  1. Jeff played in a soccer game that lasted 2 2/5 hours.
  2. Jeff played 1/3 of the game.
  3. How long did Jeff play in the game?
A similar Experiment 17 review activity problem that was authored by changing the names, details, and numerical values of the Experiment 16 problem, while retaining the format and required calculations, is:
  1. Brian rode his bike for 2 4/5 hours.
  2. Brian pedaled uphill 1/7 of the time.
  3. How long did Brian pedal uphill?
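The arithmetic for these two sample items can be checked directly. The following minimal sketch (in Python, using the standard fractions module) works both items, taking the fractions as shown above; the printed fractions appear as images in the source, so these specific values are reconstructions:

  from fractions import Fraction

  # Soccer problem: 2 2/5 hours, of which Jeff played 1/3.
  game_length = Fraction(2) + Fraction(2, 5)
  time_played = game_length * Fraction(1, 3)
  print(time_played)        # 4/5 hour, i.e., 48 minutes

  # Bicycle problem: 2 4/5 hours, of which 1/7 was uphill.
  ride_length = Fraction(2) + Fraction(4, 5)
  time_uphill = ride_length * Fraction(1, 7)
  print(time_uphill)        # 2/5 hour, i.e., 24 minutes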

Design and procedure

On January 6 or 7, students completed the Experiment 16 review activity on multiplying and dividing fractions and mixed numbers. Fourteen weeks later, on April 15, students were given a second, previously unannounced review activity consisting of 12 story problems, half requiring multiplication and half requiring division.
For the review activity, students were asked to put away their textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they knew about multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity would be good practice for part of an upcoming standardized test. The review activity was graded for use as data but was not used as a school grade.

Results

For each set of problems, a grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubric. Data of students who did all parts of Experiment 16 and the Experiment 17 review activity were included in the analyses.
Mean pre-experiment grade averages of the 88 students who participated in all parts of Experiments 16 and 17 were found to be nearly identical across conditions (Condition T, n   =   45, GPA   =   86.0; Condition GC, n   =   43, GPA   =   87.0; t (86)   =   0.64, p   =   .525, ns).
Recall that in Experiment 16, all students received the same four story problems in Part A for the multiplication homework, and all students received the same four story problems in Part A for the division homework. Performance on Part A of the multiplication homework and on Part A of the division homework serves as another comparison of a priori problem-solving ability across conditions for the 88 students who completed both experiments. Analysis of the Experiment 16 Part A homework for multiplying fractions and mixed numbers showed the groups were well matched, with percent correct for T   =   47.7 and GC   =   51.4, t (86)   =   0.60, p   =   .550, ns. Similarly, analysis of the Experiment 16 Part A homework for dividing fractions and mixed numbers showed the groups were well matched, with percent correct for T   =   43.9 and GC   =   42.6, t (86)   =   0.28, p   =   .782, ns. Because the review activity included half multiplication and half division problems, the Part A homework scores were combined for the two topics, and overall, the students in the two conditions were well matched for the Experiment 16 Part A problem solving, T   =   45.8 and GC   =   47.0, t (86)   =   0.27, p   =   .788, ns.
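These matching checks are ordinary independent-samples t tests on per-student scores. The sketch below (Python) shows the form of the computation only; the arrays are placeholders standing in for the 45 and 43 students' combined Part A percent-correct scores, not the study data:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  # Placeholder per-student Part A percent-correct scores for each condition.
  part_a_t = rng.normal(45.8, 20, size=45).clip(0, 100)    # Condition T
  part_a_gc = rng.normal(47.0, 20, size=43).clip(0, 100)   # Condition GC

  t_value, p_value = stats.ttest_ind(part_a_t, part_a_gc)
  print(f"t({part_a_t.size + part_a_gc.size - 2}) = {t_value:.2f}, p = {p_value:.3f}")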
The three parts of Condition T homework were analyzed to confirm that the 45 Condition T students who had completed both Experiments 16 and 17 showed improvement with problem-solving practice. The mean performance for each 12-minute time period is shown in Table 4.5.
Multiplication homework performance increased significantly over the three time periods as confirmed by a significant linear trend, F (1,44) = 11.32, p = .002. Division homework performance also increased over the three time periods, and this was confirmed by a significant linear trend, F (1,44) = 43.09, p < .001.
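For readers who wish to reproduce this kind of trend test, one standard approach is a per-student linear contrast across the three 12-minute periods (weights −1, 0, +1) tested against zero; for a single-degree-of-freedom contrast, F equals t squared. The sketch below (Python) uses placeholder scores rather than the study data:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  # Placeholder matrix: 45 students (rows) by Parts A, B, C scores (columns).
  scores = rng.normal([48, 58, 63], 15, size=(45, 3)).clip(0, 100)

  weights = np.array([-1, 0, 1])          # linear contrast over the three periods
  contrast = scores @ weights             # one contrast value per student
  t_value, p_value = stats.ttest_1samp(contrast, 0.0)
  print(f"F(1,{contrast.size - 1}) = {t_value ** 2:.2f}, p = {p_value:.3f}")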

Table 4.5

Learning During In-Class Homework.
Condition T Problems | Part A (12 Minutes) | Part B (12 Minutes) | Part C (12 Minutes)
Multiplication | 47.7 | 58.5 | 63.0
Division | 43.9 | 60.7 | 71.0


Table 4.6

Percent Correct on Review Activities after 1–3   Days and 14   Weeks.
Review Activity Delay | Condition T | Condition GC
1–3 days | 57.2 | 52.3
14 weeks | 38.3 | 38.6
Table 4.6 shows the review activity means for students in Conditions T and GC after 1–3 days (Experiment 16 results) and after 14 weeks (Experiment 17 results). A delay (1–3 days vs. 14 weeks) by condition (T vs. GC) analysis of variance yielded a significant main effect of delay, F (1,86) = 59.57, p < .001. The analysis confirmed that there was no main effect of condition, F (1,86) = 0.28, p = .597, ns, and no interaction of delay by condition, F (1,86) = 1.50, p = .225, ns.
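The delay-by-condition analysis is a standard two-way mixed ANOVA, with delay as the within-subjects factor and condition as the between-subjects factor. As an illustration only (the study's actual analysis software is not specified here), the same design can be run on long-format data with, for example, the pingouin package in Python; the scores below are placeholders, not the study data:

  import numpy as np
  import pandas as pd
  import pingouin as pg

  rng = np.random.default_rng(2)
  # Placeholder long-format data: 88 students x 2 delays, condition between subjects.
  condition = np.where(np.arange(88) < 45, "T", "GC")
  rows = [{"id": s, "condition": c, "delay": delay, "score": rng.normal(mean[c], 15)}
          for delay, mean in [("1-3 days", {"T": 57.2, "GC": 52.3}),
                              ("14 weeks", {"T": 38.3, "GC": 38.6})]
          for s, c in enumerate(condition)]
  df = pd.DataFrame(rows)

  aov = pg.mixed_anova(data=df, dv="score", within="delay",
                       subject="id", between="condition")
  print(aov.round(3))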

Discussion

In Experiment 16, when a previously unannounced quiz was given 1–3   days after in-class homework, Condition GC students, who had worked only eight problems, performed as well as Condition T students, who had worked 24 problems. This same pattern of results was repeated 14   weeks later in Experiment 17, thus demonstrating that, even for very long-term retention, learning from performing the Guided Cognition tasks was as efficient as learning from practicing problem solving. Results from the 14-week delayed quiz indicate that whatever was learned from performing the cognitive events was likely retained as well as what was learned from practicing problem solving.
But what are students learning from performing the cognitive events? Logically, performing these cognitive events may lead to improved understanding of the concepts underlying the problems or may lead to better long-term memory of the procedures, or to both. The results of Experiments 14 and 15 suggest that performing cognitive events promotes both concept comprehension and procedural learning: On an unexpected quiz, students whose homework had consisted of working story problems and performing cognitive events scored better on story problems and on numerical problems than did students whose homework had consisted of just working story problems.
Taken together, the results of Experiments 14 and 15, and the results of Experiments 16 and 17, indicate that performing cognitive events is both effective and efficient for learning. Experiments 14 and 15 illustrate the effectiveness of Guided Cognition-designed homework by showing that holding the amount of calculation practice constant while adding conceptual thinking tasks (cognitive events) leads to better near-term and also to better very long-term problem-solving performance. Experiments 16 and 17 illustrate the efficiency of Guided Cognition-designed homework by showing that holding study time constant while replacing some of the repetitive calculation practice with cognitive events can result in equivalent performance.

Experiment 18: Can Merely Reading Completed Examples of Cognitive Events Facilitate Learning Mathematics?

Recall that Experiment 14 was designed to determine whether the Guided Cognition-designed homework was improving students' abilities to (a) interpret and set up a story problem, (b) execute the required calculations, or (c) both. In Condition T, some 7th-grade mathematics students worked story problems. In Condition GC, other 7th-grade students worked the same problems and performed a cognitive event for each problem. An unannounced review activity included numerical-only problems and story problems. Results showed that Condition GC students performed better than Condition T students: GC > T for numerical-only problems that required only calculations, and GC > T for story problems that required interpretation and set up, and calculations. The Guided Cognition advantage was about the same for each type of problem.
Experiment 18 was designed to determine whether merely reading completed examples of Guided Cognition tasks facilitates learning in the same way that performing Guided Cognition tasks does. In other words, does merely reading examples of completed cognitive events engage the underlying cognitive processes that promote learning? Pedagogically, there are good reasons why this approach, if effective, would be useful. Reading completed examples of Guided Cognition tasks might take less time than performing the tasks while still engaging the learning-effective underlying cognitive processes, and because the provided examples are complete and correct, learning from them might exceed learning from less-than-perfect, student-produced answers.
Experiment 18 parallels Experiment 14 in all ways but one: Condition GC students did not complete cognitive events but rather read completed examples of the cognitive events.

Method

Participants

Seven classes of low- to average-ability 7th-grade middle school mathematics students participated. One teacher taught three classes, and a second teacher taught four classes.

Materials

In-class homework materials were constructed for two successive topics—multiplying fractions and mixed numbers, and dividing fractions and mixed numbers. Four story problems were prepared for each topic. Condition T problems were traditional story problems that required students to interpret and set up the problems, then to execute the calculations. Condition GC problems were identical, but also included one of the four Guided Cognition cognitive events that was fully completed so it could be read by the students. The cognitive events were: relate to prior experience, visualize and illustrate, consider divergent methods, and role play. For each of the two topics, each of these four cognitive events was paired with one of the four Condition GC problems.
An example of a Traditional in-class homework problem is:
  1. A muffin recipe calls for 1 2/5 tablespoons of vanilla extract for 6 muffins.
  2. Paul is making 18 muffins.
  3. How much vanilla extract will Paul need?
  4. Solve this problem.
  5. Remember to show all of your work.
An example of a Guided Cognition in-class homework problem with the role play cognitive event completed for the student to read is:
Part A
  1. A muffin recipe calls for 1 2/5 tablespoons of vanilla extract for 6 muffins.
  2. Paul is making 18 muffins.
  3. Pretend you are Paul, and imagine that you are explaining how you will determine the amount of vanilla extract you will need. Here is what you might say: “I need to make 18 muffins for a holiday dinner. I know that I need 1 2/5 tablespoons of vanilla extract for 6 muffins. To determine the total amount of vanilla extract for 18 muffins, I will first calculate how many groups of 6 muffins are in 18 muffins. Then I will multiply that number by the amount of vanilla extract needed for 6 muffins. To do that, I will change the mixed number to an improper fraction.”
Part B
  1. A muffin recipe calls for 1 2/5 tablespoons of vanilla extract for 6 muffins.
  2. Paul is making 18 muffins.
  3. How much vanilla extract will Paul need?
  4. Solve this problem.
  5. Remember to show all of your work.
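For reference, the calculation the role play walks the student toward can be checked directly; the short Python sketch below takes the recipe amount as the mixed number 1 2/5 shown above (a reconstruction, since the printed fraction is an image in the source):

  from fractions import Fraction

  vanilla_per_six = Fraction(1) + Fraction(2, 5)   # 1 2/5 = 7/5 tablespoons
  groups_of_six = 18 // 6                          # three groups of 6 muffins in 18
  total_vanilla = groups_of_six * vanilla_per_six
  print(total_vanilla)                             # 21/5, i.e., 4 1/5 tablespoons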
Materials for a previously unannounced quiz, described by the teachers as a review activity, were constructed to determine whether reading the completed Guided Cognition tasks helped students with problem interpretation and set up, or with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations. These were the same problems used in a previous year for the Experiment 14 review activity.

Design and procedure

Traditional and Guided Cognition unsupervised individual learning (or in-class homework) conditions were assigned within each class to control for time-of-day and teacher variables. To balance student ability across conditions, students in each class were rank-ordered by their course grade averages for the first 3   months of the school year. In each successive pair of rank-ordered students, one was assigned to Condition T, and one was assigned to Condition GC. Order of assignment to conditions alternated in successive pairs.
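The assignment procedure can be stated as a simple rule. The following sketch (in Python, with hypothetical student identifiers) renders the procedure as described, assuming the list is already rank-ordered from highest to lowest grade average; it is an illustration, not the authors' actual code:

  def assign_conditions(students_by_grade):
      """students_by_grade: student identifiers, already rank-ordered by grade average."""
      assignments = {}
      for pair_start in range(0, len(students_by_grade), 2):
          pair = students_by_grade[pair_start:pair_start + 2]
          # Order of assignment alternates in successive pairs.
          order = ["T", "GC"] if (pair_start // 2) % 2 == 0 else ["GC", "T"]
          for student, condition in zip(pair, order):
              assignments[student] = condition
      return assignments

  print(assign_conditions(["s1", "s2", "s3", "s4", "s5", "s6"]))
  # {'s1': 'T', 's2': 'GC', 's3': 'GC', 's4': 'T', 's5': 'T', 's6': 'GC'}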
The mathematics topics were taught as usual, but evening homework was replaced by in-class homework to assure a high participation rate and to assure that students did their own work. Students who completed the in-class homework before the class period ended were given a mathematics worksheet on another topic for the remaining time. Instructions to the students for the in-class homework were a paraphrase of the following: “Today's homework will be done in class. You may use your book and class notes if needed. Please do your own work, without talking to your classmates, and try to complete the work without help. I have made more than one set of problems, so yours may be a little different from your neighbor's. When you have completed the problems, please bring your papers to me. Then I will give you another sheet to work on for the remainder of the class period.”
Following the teaching and in-class homework of the two topics, students were given a previously unannounced review activity consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. To counterbalance time spent on the two types of problems, each version of the review activity was given to half the students in each in-class homework condition (T or GC).
For the review activity, students were asked to put away textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they knew about the last two book-sections on multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity would be good practice for part of a future chapter test. The review activity was graded for use as data but was not used as a school grade.
The experiment timeline was as follows:
Friday–Monday: Instruction on multiplying fractions and mixed numbers.
Tuesday: In-class homework on multiplying fractions and mixed numbers.
Wednesday–Thursday: Instruction on dividing fractions and mixed numbers.
Friday: In-class homework on dividing fractions and mixed numbers.
Weekend: No new material or homework.
Monday or Tuesday: Review activity.

Results

A grading rubric was designed and used by both teachers. All work was scored for partial credit. Mean pre-experiment grade averages of 92 students who participated in all parts of the experiment were found to be nearly identical across conditions (Condition T, n   =   49, GPA   =   86.3; Condition GC, n   =   43, GPA   =   86.8, t (90)   =   0.30, p   =   .768, ns).
Analysis of review activity (quiz) performance, without regard to problem type, found no Guided Cognition effect, with T and GC Condition means   =   55.9 and 60.4 percent correct, respectively, t (90)   =   1.16, p   =   .249, ns. Further analysis by problem type confirmed no Guided Cognition effect for numerical problems, with T and GC Condition means   =   81.4 and 83.6 percent correct, respectively, t (90)   =   0.49, p   =   .626, ns, and also confirmed no Guided Cognition effect for story problems with T and GC Condition means   =   45.7 and 51.1 percent correct, respectively, t (90)   =   1.20, p   =   .234, ns.
Having found no benefit of merely reading completed cognitive events among the 92 students who participated in the experiment, we decided to look further by examining the performance of only those students who had performed reasonably well on their in-class homework. Data from students who scored 50 percent or better on the multiplication homework problems and on the division homework problems were included in these new analyses. We first determined that students in Condition T and Condition GC were well matched for ability by confirming that the pre-experiment grade averages were nearly identical across conditions (Condition T, n   =   28, GPA   =   89.4, Condition GC, n   =   27, GPA   =   88.4, t (53)   =   0.46, p   =   .649, ns).
Analysis of review activity (quiz) performance, without regard to problem type, confirmed an overall Guided Cognition effect for these students, GC   >   T, t (53)   =   2.07, p   =   .044. This analysis gives more weight to the story problems because according to the grading rubric, each story problem was worth five points, and each numerical problem was worth two points. As was mentioned in Experiment 14, this way of grading performance is analogous to how an actual quiz might be graded, with more points based on more complex problems: From that viewpoint, there is nearly a letter grade improvement because of Guided Cognition, assuming that letter grades are 10 percentage points apart. Means are shown in the top row of Table 4.7.
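To make the weighting concrete: the review activity contained eight story problems worth five points each (40 points) and eight numerical problems worth two points each (16 points), so story problems contribute 40 of the 56 available points, or roughly 71 percent of the total-points score.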
To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of in-class homework condition (T vs. GC) by review activity problem type (numerical vs. story) revealed a borderline main effect of condition, GC > T, F (1,53) = 3.86, p = .055; a main effect of problem type, numerical > story, F (1,53) = 96.85, p < .001; and no interaction, F (1,53) = 0.44, p = .510, ns. The means for performance on numerical problems and story problems, when the two types of problems are equally weighted, are shown in Table 4.7.

Table 4.7

Mean Percent Correct for Review Activity Problems Based on Total Points for All Problems, and on Equally Weighted Numerical and Story Problems, after Traditional or Guided Cognition In-Class Homework.
Problem Type | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC − T
All problems | 60.6 | 68.5 | 7.9
Numerical problems | 85.5 | 90.3 | 4.8
Story problems | 50.6 | 59.8 | 9.2


Discussion

The learning advantage of Guided Cognition found in Experiment 18, where students merely read completed examples of cognitive events, is clearly weaker than the Guided Cognition advantage found in Experiment 14, where students performed the cognitive events. In Experiment 18, the learning advantage was apparent only for a subset of students who scored 50 percent or higher on each of their two sets of homework. As in Experiment 14, there was no interaction of homework condition (Traditional or Guided Cognition) with review activity problem type (story or numerical), so once again we cannot say that Guided Cognition homework helps more with one type of problem than the other. For these students, the Guided Cognition advantage was found for story problems that required interpretation and set up, and calculations; the advantage was also found for numerical problems that required only calculations.
A practical implication of these results is that embedding completed examples of Guided Cognition tasks into assigned story problems may facilitate subsequent performance in interpreting and solving story problems, and in performing calculations for plain numerical problems. Typically, homework problems do not contain such explanatory and exemplary content: This is usually confined to the introduction of each topic in mathematics textbooks. The results of Experiment 18 suggest that such content, if embedded with assigned practice problems, may boost the value of practice, but only if students do reasonably well on their homework.
The next experiment was designed to determine whether the advantage of merely reading completed examples of cognitive events lasts for a very long time.

Experiment 19: Is There Very Long-Term Improvement in Problem-Solving Performance When Completed Guided Cognition Examples Are Merely Read?

From Experiments 14 and 15, we know that performing Guided Cognition homework can improve 7th-grade students' abilities to interpret and work story problems, and also to execute calculations for non-story problems, and that these improvements persist for at least 6 months. In Experiment 18, 7th-grade students worked story problems (Condition T) or worked story problems and read completed examples of Guided Cognition tasks (Condition GC). Performance on a previously unannounced quiz given 3–6 days later showed that merely reading completed examples of Guided Cognition cognitive event tasks can improve 7th-grade students' abilities to interpret and work story problems and also can help them learn to solve plain numerical problems. The positive effects of merely reading completed examples of cognitive events were, however, weaker than those in Experiment 14, where students performed the cognitive events. Does the benefit of merely reading completed examples last for many months, parallel to the results of Experiment 15? If reading completed examples of cognitive events can improve stability of skills, either by improving initial understanding or by increasing very long-term retrieval, or by a combination of these, then providing such examples within the context of problems would be a valid design goal for mathematics instruction.
In Experiment 19, students from Experiment 18 were retested 5   months later to determine whether the benefits of reading completed examples of Guided Cognition cognitive events would last over a substantial time interval. As was true for Experiment 15, this is a very conservative test of long-term Guided Cognition effects because during the 5-month interval, the students had reviewed and been tested on fraction skills, and the lasting effects of the Guided Cognition in-class homework could possibly have been modulated by that study-test experience. Any residual benefit of Guided Cognition would be in addition to this intervening learning.

Method

Participants

The participants were the same seven classes of low- to average-ability 7th-grade middle school mathematics students who had participated in Experiment 18. One teacher taught three classes, and a second teacher taught four classes.

Materials

As in Experiment 18, review activity materials were constructed to determine whether the Guided Cognition experience helped students with problem interpretation and set up, or with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations. These were the same problems used in a previous year for the Experiment 15 review activity.
The Experiment 19 review activity was constructed to be formally similar to the Experiment 18 review activity. Experiment 19 numerical problems were authored by changing the numerical values of Experiment 18 numerical problems while retaining the required calculation steps. Similarly, story problems in Experiment 19 were authored by changing the names, details, and numerical values of Experiment 18 story problems while retaining the logical structures and required set up and calculation steps.

Design and procedure

On December 20 or 21, students completed the Experiment 18 review activity on multiplying and dividing fractions and mixed numbers. Five months later, on May 12, students were given a second, unannounced review activity consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. Half the students who had been in the Traditional in-class homework condition in Experiment 18 were given one version, and half were given the other version. Similarly, half the students who had been in the Guided Cognition homework condition in Experiment 18 were given one version, and half were given the other.
For the review activity, students were asked to put away their textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they knew about multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for mathematics they would study in the 8th grade. Students did their own work with no help from the teachers or from other students. The review activity was graded for use as data but was not used as a school grade.

Results

For each set of problems, a grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubric. Data from students who scored 50% or better on the multiplication homework problems and on the division homework problems in Experiment 18 and who participated in all parts of Experiments 18 and 19 were included in the analyses. Pre-experiment grade averages for these students were nearly identical across conditions (Condition T, n   =   28, GPA   =   89.4, Condition GC, n   =   27, GPA   =   88.4, t (53)   =   0.46, p   =   .649, ns).
Analysis of in-class homework condition (T vs. GC) by review activity delay (3–6 days vs. 5 months), without regard to problem type, found no main effect of condition, GC vs. T, F (1, 53) = 0.01, p = .926, ns, a main effect of review activity delay, 3–6 days > 5 months, F (1, 53) = 17.98, p < .001, and a significant condition by delay interaction, F (1, 53) = 5.27, p = .026. The interpretation of this interaction is made clear by the means shown in Table 4.8.
Performance on the 3- to 6-day delayed review activity was 7.9 percentage points better for the students who had the Guided Cognition homework when compared to students who had the Traditional homework, but after 5   months this effect was reversed such that students who had Traditional homework performed 7.1 percentage points better than the students who had the Guided Cognition homework.

Table 4.8

Mean Overall Percent Correct on the Review Activity after Traditional or Guided Cognition In-Class Homework at 3- to 6-Day and 5-Month Review Activity Delays.
Review Activity Delay | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC − T
All problems after 3–6 days | 60.6 | 68.5 | 7.9
All problems after 5 months | 54.2 | 47.1 | −7.1


This analysis gives more weight to the story problems because according to the grading rubric, each story problem was worth five points, and each numerical problem was worth two points. To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of homework condition (T vs. GC) by review activity problem type (numerical vs. story) by review activity delay (3–6 days vs. 5 months) revealed no main effect of condition, F (1, 53) = 0.00, p = .997, ns; a main effect of delay, 3–6 days > 5 months, F (1,53) = 22.15, p < .001; and a main effect of problem type, numerical > story, F (1,53) = 105.24, p < .001. There was a borderline interaction of delay with condition, F (1,53) = 3.69, p = .060, reflecting the findings, illustrated in Figure 4.3, that GC > T on a 3- to 6-day delayed review activity, but T > GC on a 5-month delayed review activity. There was no interaction of problem type with condition, F (1,53) = 0.16, p = .688, ns, or of delay by problem type by condition, F (1,53) = 0.31, p = .582, ns. There was an interaction of delay and problem type, F (1,53) = 12.06, p = .001, indicating that over the 5-month interval, the drop in performance is greater for numerical problems than for story problems.
Figure 4.3 Mean percent correct for two types of problems at review activity delays of 3–6   days and 5   months after Traditional or Guided Cognition in-class homework, where students in the Guided Cognition condition merely read completed cognitive events.

Discussion

In Experiment 14, students who performed cognitive events as part of their Guided Cognition homework scored better on a 3- to 6-day delayed review activity, compared to students who had Traditional homework. When retested 6   months later, in Experiment 15, the students who had worked the cognitive events as part of their Guided Cognition homework continued to score better than students who had done the Traditional homework.
In Experiment 18, we analyzed the results of students who scored 50 percent or better on each of the multiplication and division homework assignments. Students in the Guided Cognition condition, who merely read completed examples of cognitive events, performed somewhat better on a 3- to 6-day delayed review activity compared to students who had Traditional homework. In sharp contrast, when retested 5   months later, in Experiment 19, we obtained the opposite results—the students who had Traditional homework performed better than the students who had read the completed cognitive events as part of their homework.
It is likely that reading the completed cognitive events provided too much help such that students in Condition GC had to think less than students in Condition T to work the homework problems. By providing the completed cognitive events, we may have short-circuited students' essential thinking about how to solve the problems. In the near term (3–6   days), students may have recalled some of the guidance provided by the cognitive event examples, and this may have helped them solve the problems better than the Condition T students. If over time this specific guidance is forgotten, these students may remember less about how to solve the problems than the Condition T students who had to work out how to solve the homework problems without guidance.
These results join a long list of counterintuitive results that have been found in studies of human learning where making initial cognitive processing a bit more difficult results in better performance later (and conversely, making initial cognitive processing easier results in poorer performance later). For example, experiments have found that performance on initial recall tests of word pairs decreased as the initial test delay increased, but later, on a second test, recall of the word pairs increased as the initial test delay had increased. An explanation of this “spacing of tests effect” is that successful recall on an initial test requires more or different cognitive processing as the initial test is delayed, and this extra cognitive processing creates a more retrievable long-term memory and better recall on a later, second test (Whitten, 2011a; Whitten & Bjork, 1972, 1977).
Considering the results of Experiments 14, 15, 18, and 19, the recommendations are clear. For very long-term benefits, students need to perform the cognitive events. The results of these experiments suggest the following: Working through the steps of the cognitive events activates learning-effective cognitive processes. Merely reading cognitive events is a sort of crutch that may help moderately in the near term, probably by providing guidance that can be recalled for a few days. Ultimately, however, merely reading cognitive events is detrimental for very long-term performance. Once recall of the specific guidance fails, these students have less understanding of, and lower recall of, how to work the problems than Condition T students who had to work through the logic and mechanics on their own. So for the long term, working problems and performing corresponding cognitive events results in the best performance, working problems that are not enriched with cognitive events is next best, and working problems but merely reading completed examples of corresponding cognitive events is least effective.
It is important to contrast these results with seemingly contradictory results reported in the learning literature. Sweller and Cooper (1985), for example, have found that students benefit from seeing step-by-step completed examples of mathematics problems. There is little doubt that showing students procedures and patterns can help them solve similar problems later, assuming that the students recognize the problem type, recall the specific pattern, and then apply it. The cognitive events in our experiments differ from such examples in that they do not show all of the steps for working problems, but instead encourage specific sorts of cognitive processes. If performed, the cognitive events elicit these learning-effective processes, but if merely read, they apparently do not.

Experiment 20: Is Guided Cognition Homework Beneficial for Mathematics Students of Various Ability Levels?

Experiment 20 was designed to evaluate the generality of Guided Cognition benefits across student ability levels. We considered performing an experiment with advanced- and average-ability levels, but in the middle school where we conducted our experiments, these two ability levels use different books and have different curricula. Consequently, we decided to compare Guided Cognition effects for low- and average-ability students who are taught together, using the same materials. In the school where we performed the experiment, low- to average-level students filled eight classrooms, and advanced-level students filled four classrooms, so our examination of low-ability and average-ability students looked at the lower and middle thirds of ability within the school. To answer the question of the planned Experiment 20, we were able to reanalyze the data of Experiment 14, adding ability level as a variable.
Based on their course grades from the first 3   months of the school year, students were rank-ordered, then partitioned to create low- and average-ability groups. Within each ability level, half received Traditional and half received Guided Cognition study activities. To determine whether Guided Cognition was helpful across these ability levels, students completed a review activity on the types of problems in their homework.

Method

Participants

Six classes of low- to average-ability 7th-grade middle school mathematics students participated. Two teachers taught three classes each.

Materials

In-class homework materials were constructed for two successive topics—multiplying fractions and mixed numbers, and dividing fractions and mixed numbers. Four story problems were prepared for each topic. Condition T problems were traditional story problems that required students to interpret and set up the problems, then to execute the calculations. Condition GC problems were identical, but also included one of the four Guided Cognition cognitive events: relate to prior experience, visualize and illustrate, consider divergent methods, and role play. For each of the two topics, each of these four cognitive events was paired with one of the four Condition GC problems.
Materials for a previously unannounced quiz, described by the teachers as a review activity, were constructed to determine whether the Guided Cognition experience helped students with problem interpretation and set up, or with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations.

Design and procedure

Traditional and Guided Cognition unsupervised individual learning (or in-class homework) conditions were assigned within each class to control for time-of-day and teacher variables. To balance student ability across conditions, students in each class were rank-ordered by their course grade averages for the first 3   months of the school year. In each successive pair of rank-ordered students, one was assigned to Condition T, and one was assigned to Condition GC. Order of assignment to conditions alternated in successive pairs.
The mathematics topics were taught as usual, but evening homework was replaced by in-class homework to assure a high participation rate and to assure that students did their own work. Students who completed the in-class homework before the class period ended were given a mathematics worksheet on another topic for the remaining time. Instructions to the students for the in-class homework were a paraphrase of the following: “Today's homework will be done in class. You may use your book and class notes if needed. Please do your own work, without talking to your classmates, and try to complete the work without help. I have made more than one set of problems, so yours may be a little different from your neighbor's. When you have completed the problems, please bring your papers to me. Then I will give you another worksheet to use for the remainder of the class period.”
Following the teaching and in-class homework of the two topics, students were given a previously unannounced review activity consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. Each version of the review activity was given to half the students in each in-class homework condition (T or GC) to counterbalance time spent on the two types of problems.
For the review activity, students were asked to put away their textbooks, class notes, and calculators. The review activity was described as an opportunity to see what they knew about the last two book sections on multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for part of a future chapter test. The review activity was graded for use as data but was not used as a school grade.
The experiment timeline was as follows:
Thursday–Friday: Taught multiplying fractions and mixed numbers.
Saturday–Monday: Weekend with no new material or homework, followed by a field trip.
Tuesday: In-class homework on multiplying fractions and mixed numbers.
Wednesday–Thursday: Taught dividing fractions and mixed numbers.
Friday: In-class homework on dividing fractions and mixed numbers.
Weekend: No new material or homework.
Monday: Review activity.

Results

For each set of problems, a grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubrics. To confirm that we had achieved our goal of stratifying ability without bias toward either condition, we analyzed the GPAs of students who participated in all parts of the experiment. The mean pre-experiment grade averages of the 48 low-ability students were found to be nearly identical across conditions (Condition T, n = 24, GPA = 80.3; Condition GC, n = 24, GPA = 81.4), and the mean pre-experiment grade averages of the 48 average-ability students were found to be nearly identical across conditions (Condition T, n = 24, GPA = 92.5; Condition GC, n = 24, GPA = 93.1). Analysis of variance confirmed a main effect of ability level, F (1, 92) = 153.33, p < .001, no main effect of condition assignment, F (1, 92) = 0.86, p = .355, ns, and no interaction of ability level with condition assignment, F (1, 92) = 0.08, p = .779, ns.
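Checks of this kind, like the quiz analyses that follow, are two-factor between-subjects analyses of variance (homework condition by ability level). As an illustration only, with placeholder values standing in for the 96 students' scores, such a design could be run as follows in Python (for example, with statsmodels):

  import numpy as np
  import pandas as pd
  import statsmodels.api as sm
  from statsmodels.formula.api import ols

  rng = np.random.default_rng(3)
  # Placeholder data: 2 homework conditions x 2 ability levels, 24 students per cell.
  cells = [("T", "low", 49.0), ("GC", "low", 58.6),
           ("T", "average", 60.9), ("GC", "average", 68.5)]
  rows = [{"condition": c, "ability": a, "score": rng.normal(m, 15)}
          for c, a, m in cells for _ in range(24)]
  df = pd.DataFrame(rows)

  model = ols("score ~ C(condition) * C(ability)", data=df).fit()
  print(sm.stats.anova_lm(model, typ=2))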
Analysis of review activity (quiz) performance, without regard to problem type, confirmed a main effect of homework condition, GC   >   T, F (1, 92)   =   7.93, p   =   .006; a main effect of ability level, average   >   low, F (1, 92)   =   12.58, p   =   .001; and no interaction, F (1, 92)   =   0.11, p   =   .743. Thus, Guided Cognition facilitated learning similarly for low- and average-ability students.
As in previous experiments, this analysis gives more weight to the story problems because according to the grading rubric, each story problem was again worth five points, and each numerical problem was again worth two points. As was mentioned in the results of Experiment 14, this way of grading the quiz is analogous to how an actual quiz might be graded, with more points based on more complex problems: This method of grading shows nearly a letter grade improvement due to Guided Cognition, assuming that letter grades are 10 percentage points apart. Means are shown in the top third of Table 4.9.

Table 4.9

Mean Percent Correct for Review Activity Problems Based on Total Points for All Problems, and on Equally Weighted Numerical and Story Problems, after Traditional or Guided Cognition In-Class Homework for Low- and Average-Ability Students.
Problem Type | Student Mathematics Ability Level | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC − T
All problems | Low | 49.0 | 58.6 | 9.6
All problems | Average | 60.9 | 68.5 | 7.6
Numerical problems | Low | 79.7 | 84.4 | 4.7
Numerical problems | Average | 82.6 | 92.5 | 9.9
Story problems | Low | 36.8 | 48.3 | 11.5
Story problems | Average | 52.2 | 58.9 | 6.7


To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of in-class homework condition (T vs. GC) by review activity problem type (numerical vs. story) by ability level (low vs. average) revealed a main effect of homework condition, GC > T, F (1,92) = 8.80, p = .004; a main effect of problem type, numerical > story, F (1,92) = 239.28, p < .001; a main effect of ability level, average > low, F (1,92) = 11.12, p = .001; and no significant interactions. These results indicate that Guided Cognition facilitated learning similarly for low- and average-ability students for both types of problems. Means are shown in the lower two-thirds of Table 4.9.

Discussion

These results clearly show that students performing at low and average levels can benefit from Guided Cognition homework tasks. It is especially important to confirm that students in the lowest third of ability can benefit because these students are most at risk for failure. Looking at the scores in Table 4.9, it is obvious that many students, in what amounts to two-thirds of the school's population, have difficulty with the story problems. Even so, Guided Cognition study improved their performance on these problems, as it did on the numerical problems.
It is also of interest to note that on the review activity, low-ability students who had Guided Cognition homework performed at levels similar to those of average-ability students who had Traditional homework. For numerical problems, low-ability students who had Guided Cognition homework scored 84.4 percent correct, and average-ability students who had Traditional homework scored 82.6 percent correct. For story problems, low-ability students who had Guided Cognition homework scored 48.3 percent correct, and average-ability students who had Traditional homework scored 52.2 percent correct. Thus, studying with Guided Cognition homework makes low-ability students look much like average-ability students.

Experiment 21: Is the Guided Cognition Advantage Maintained Over a Very Long Time for Low- and Average-Ability Students?

It is possible to find similar near-term advantages for both low- and average-ability students, but also to find that, over time, the relative advantages vary as a function of ability. Experiment 21 was designed to determine whether the Guided Cognition advantage is maintained over a very long time for low- and average-ability students and to determine in more detail whether the advantage is retained for solving story problems, which require interpretation and set up, and calculations, and whether the Guided Cognition advantage is also retained for numerical problems that only require calculations.

Method

Participants

The participants were the same six classes of low- to average-ability 7th-grade middle school mathematics students who had participated in Experiment 20. Two teachers taught three classes each.

Materials

As in Experiment 20, review activity materials were constructed to determine whether the Guided Cognition experience helped students with problem interpretation and set up, with executing calculations, or with both. The 16-problem review activity consisted of four multiplication and four division numerical problems that required only calculations, and four multiplication and four division story problems that required interpretation and set up, and calculations.
The Experiment 21 review activity was constructed to be formally similar to the Experiment 20 review activity. Experiment 21 review activity numerical problems were authored by changing the numerical values of the Experiment 20 review activity numerical problems while retaining the required calculation steps. Similarly, Experiment 21 review activity story problems were authored by changing names, details, and numerical values of the Experiment 20 review activity story problems while retaining the logical structures and required set up and calculation steps.

Design and procedure

On December 15, students completed the Experiment 20 review activity on multiplying and dividing fractions and mixed numbers. Six months later, on June 12, for Experiment 21, these students were given a second, unannounced review activity consisting of eight numerical problems and eight story problems. Two versions of the review activity were prepared. One version presented the numerical problems before the story problems, and the other version presented the story problems before the numerical problems. Half the students who had been in the Traditional in-class homework condition in Experiment 20 were given one version, and half were given the other version. Similarly, half the students who had been in the Guided Cognition in-class homework condition in Experiment 20 were given one version, and half were given the other.
For the review activity, students were asked to put away their textbooks, class notes, and calculators. As in previous experiments, the teachers described the review activity as an opportunity for the students to see what they knew about multiplying and dividing fractions and mixed numbers. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for mathematics they would study in the 8th grade. The review activity was graded for use as data but was not used as a school grade.

Results

For each set of problems, a grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubric. Data of students who did all parts of Experiment 20 and who worked on both parts (numerical problems and story problems) of the Experiment 21 review activity were included in the analyses.
The mean pre-experiment grade averages of 36 low-ability students were found to be very similar across conditions (Condition T, n = 17, GPA = 79.6; Condition GC, n = 19, GPA = 82.9), and the mean pre-experiment grade averages of 35 average-ability students were found to be nearly identical across conditions (Condition T, n = 17, GPA = 92.5; Condition GC, n = 18, GPA = 93.2). Analysis of variance confirmed a main effect of ability level, F (1, 67) = 114.19, p < .001, no main effect of homework condition assignment, F (1, 67) = 3.51, p = .065, ns, and no interaction of ability level with homework condition assignment, F (1, 67) = 1.39, p = .243, ns.
Analysis of review activity (quiz) performance, without regard to problem type, confirmed a main effect of homework condition, GC > T, F (1,67) = 8.96, p = .004, a main effect of review activity delay, 3–6 days > 6 months, F (1,67) = 26.43, p < .001, and a main effect of ability level, average > low, F (1,67) = 13.64, p < .001. The only significant interaction was delay by ability level, F (1,67) = 4.54, p = .037, indicating that lower-ability students forgot more over the 6-month interval.
As explained in previous experiments, this analysis of total points gives more weight to the story problems because according to the grading rubric, each story problem was worth five points, and each numerical problem was worth two points, but this is a valid method for estimating the grade improvement that can be attributed to Guided Cognition homework. Whether after 3–6   days or 6   months, low- and average-ability students who had the Guided Cognition homework performed about one grade better, assuming that letter grades are 10 percentage points apart. Means are shown in the top third of Table 4.10.
To equalize the weighting of story and numerical problems, point totals for each problem type were converted to percent-correct values for each student. Analysis of in-class homework condition (T vs. GC) by review activity problem type (numerical vs. story) by ability level (low vs. average) by review activity delay (3–6 days vs. 6 months) revealed a main effect of homework condition, GC > T, F (1,67) = 9.60, p = .003; a main effect of problem type, numerical > story, F (1,67) = 212.62, p < .001; a main effect of ability level, average > low, F (1,67) = 14.30, p < .001; and a main effect of review activity delay, 3–6 days > 6 months, F (1,67) = 35.08, p < .001.
There were no interactions with homework condition, indicating that overall, Guided Cognition homework helped low- and average-ability students after short and long delays, and with both types of problems. There was an interaction of review activity delay and problem type, F (1,67)   =   5.55, p   =   .021, indicating that over the 6-month interval, the drop in performance is greater for numerical problems than for story problems. There was also a review activity delay by ability level interaction, F (1,67)   =   5.23, p   =   .024, indicating that the low-ability students forgot more over the longer delay. Means are shown in the lower two-thirds of Table 4.10.

Table 4.10

Mean Percent Correct for Review Activity Problems Based on Total Points for All Problems, and on Equally Weighted Numerical and Story Problems, after Traditional or Guided Cognition In-Class Homework for Low- and Average-Ability Students at 3- to 6-Day and 6-Month Review Activity Delays.
Problem Type | Review Activity Delay | Student Mathematics Ability Level | Review Activity Score after Traditional In-Class Homework | Review Activity Score after Guided Cognition In-Class Homework | GC − T
All problems | 3–6 days | Low | 51.7 | 64.0 | 12.3
All problems | 3–6 days | Average | 60.6 | 68.8 | 8.2
All problems | 6 months | Low | 35.7 | 43.2 | 7.5
All problems | 6 months | Average | 51.3 | 62.9 | 11.6
Numerical problems | 3–6 days | Low | 77.4 | 89.6 | 12.2
Numerical problems | 3–6 days | Average | 81.8 | 92.5 | 10.7
Numerical problems | 6 months | Low | 61.0 | 55.3 | −5.7
Numerical problems | 6 months | Average | 66.5 | 83.7 | 17.2
Story problems | 3–6 days | Low | 41.7 | 54.2 | 12.5
Story problems | 3–6 days | Average | 52.5 | 59.6 | 7.1
Story problems | 6 months | Low | 25.6 | 38.4 | 12.8
Story problems | 6 months | Average | 45.2 | 54.6 | 9.4


Discussion

These results clearly show that students performing at low and average levels can benefit from Guided Cognition homework tasks over a substantial time period. It is especially important to confirm that students in the lowest third of ability can benefit because these students are most at risk for failure. With the possible exception of numerical-only problems for low-ability students, the improvements held for half a year. Although this exception did not reach statistical significance, it suggests that lower-ability students may need more procedural practice, along with the more conceptual contributions of Guided Cognition study tasks, to retain numerical problem-solving skills over the long term.

Experiment 22: Is Guided Cognition More Effective as an “Advance Organizer” or as a “Consolidator”?

We have ample evidence that Guided Cognition homework promotes learning. Experiment 22 was designed to learn more about how this happens. In particular, we asked, “Is Guided Cognition homework effective because it serves as an advance organizer and prepares students for understanding the problems they will work? Or is Guided Cognition homework effective because it helps students consolidate information about problems after they have been worked?”
The concept of advance organizers was introduced by Ausubel (e.g., 1960, 1968) and has been extended to include assimilation encoding theory (e.g., Mayer, 1979; Mayer & Bromage, 1980). Advance organizers, according to Ausubel (1968), provide ideational scaffolding for more detailed material that will be presented for learning. We are adapting the theoretical idea of advance organizers to include cognitive events that do not present a complete mathematics problem to solve but do provide enough detail to guide students' thinking about a type of problem and about the general approach for solving that type of problem. An effective advance organizer should make subsequent problem-solving practice more meaningful and more memorable, which, in turn, should facilitate solving similar problems in the future.
Alternatively, a cognitive event that follows problem-solving practice may serve as a “consolidator” by providing a framework for reviewing a particular type of problem, thereby increasing meaningfulness and memory for the concepts, formulas, definitions, and procedures of that type of problem. This result should facilitate solving similar problems in the future.
Mathematics instructional materials can be designed so that Guided Cognition tasks occur before or after problem-solving practice. So, is performance on future problem solving facilitated more by experiencing cognitive events before practicing related problems (i.e., as an advance organizer), or by experiencing these cognitive events after practicing the problems (i.e., as a consolidator)? Of course, it is possible that Guided Cognition homework is effective both as an advance organizer and as a consolidator. This experiment was designed to determine which is the stronger effect.

Method

Participants

Six classes of low- to average-ability 7th-grade middle school mathematics students participated. Two teachers taught three classes each.

Materials

In-class homework materials were constructed on the topics of area and perimeter of parallelograms, triangles, trapezoids, and circles. For each shape, the homework included one traditional story problem that required students to interpret and set up the problem, and then to execute the calculations.
As an example, this is the story problem requiring calculation of the area of a circle:
  1. A circular skating rink has a circumference of 125.6   feet and a radius of 20   feet.
  2. What is the area of the skating rink?
  3. Solve this problem.
  4. Remember to show all of your work.
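For reference, the problem's numbers are internally consistent with π approximated as 3.14: the stated circumference checks out as C = 2πr ≈ 2 × 3.14 × 20 = 125.6 feet, and the expected answer is the area A = πr² ≈ 3.14 × 20² = 1,256 square feet.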
The in-class homework also included four Guided Cognition tasks, one each of the four cognitive events: relate to prior experience, visualize and illustrate, consider divergent methods, and role play. Each of the four geometric shapes was included once across the set of four cognitive events. The cognitive events did not include enough numbers to work a problem, and the instructions told students not to work a problem.
As an example, this is the Guided Cognition task using the consider divergent methods cognitive event to think about how to calculate the area of a circle:
  1. Ed wants to determine the area of a circular pond.
  2. He has a long rope that can go around the edge of the pond, or can be pulled across the middle of the pond.
  3. Describe two ways Ed can measure parts of the pond so that he will be able to calculate its area. Explain how to calculate the area from what Ed measures, but do not work a problem.
  4. Circle mathematics terms you use in your description.
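For reference, the two measurement routes the task invites are: wrapping the rope around the edge gives the circumference C, from which the radius is r = C / (2π) and the area is A = πr²; pulling the rope across the middle gives the diameter d, from which r = d / 2 and, again, A = πr².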
The review activity consisted of 16 story problems that required interpretation and set up, and calculations. Four problems were designed for each of the four shapes. For each shape, two problems required finding the area of a shape from dimensions of its parts, and two problems required finding dimensions of a part from the area of the shape.
A problem sequence of “triangle, circle, parallelogram, trapezoid” was repeated four times across the 16 serial positions of the review activity. Within each four-problem set and for each shape, problems that required finding area or finding a dimension alternated across the serial positions.
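The exact assignment of problems to serial positions is not reproduced here, but the counterbalancing scheme just described can be sketched as follows. The shape and task labels in this Python sketch are illustrative placeholders, not the wording of the actual problems.

  # Sketch of the review activity's counterbalanced problem order (illustrative only).
  shapes = ["triangle", "circle", "parallelogram", "trapezoid"]
  tasks = ["find the area from given dimensions", "find a dimension from the given area"]

  review_order = []
  for repetition in range(4):                # the four-shape sequence repeats four times
      for position, shape in enumerate(shapes):
          # Within each four-problem set, and for each shape across sets,
          # the two task types alternate across serial positions.
          review_order.append((shape, tasks[(repetition + position) % 2]))

  for number, (shape, task) in enumerate(review_order, start=1):
      print(f"{number:2d}. {shape}: {task}")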

Design and procedure

All students were given the same problems to solve and the same cognitive events to perform. Only the order of materials differed in the two conditions. In the Advance Organizer Condition, students performed Guided Cognition tasks and then practiced solving the geometry problems. In the Consolidator Condition, students practiced solving geometry problems and then performed Guided Cognition tasks.
The two in-class homework conditions were assigned within each class to control for time-of-day and teacher variables. To balance student ability across conditions, students in each class were rank-ordered by their course grade averages for the first 3 months of the school year. Five months earlier, these students had participated in Experiment 14, where, for each successive pair of rank-ordered students, one had been assigned to Condition T and one to Condition GC, with the order of assignment alternating across successive pairs. It is highly unlikely that one Guided Cognition homework experience could affect the study of completely different content 5 months later, but we controlled for prior Guided Cognition homework experience by assigning students in Experiment 22 to the Advance Organizer and Consolidator Conditions in alternation down the rank-ordered list. This method balanced prior Guided Cognition homework experience across the two conditions and also resulted in a nearly perfect balance of grade averages across the two conditions.
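For concreteness, the alternating assignment down the rank-ordered list can be sketched as below. The student records and field names in this Python sketch are hypothetical; in the experiment, the ranking was done within each class using course grade averages from the first 3 months of the school year.

  # Sketch of the within-class condition assignment (hypothetical data).
  def assign_conditions(students):
      """Rank-order students by grade average, then alternate assignment to the
      Advance Organizer and Consolidator Conditions down the ranked list."""
      ranked = sorted(students, key=lambda s: s["grade_average"], reverse=True)
      conditions = ["Advance Organizer", "Consolidator"]
      return [dict(student, condition=conditions[i % 2]) for i, student in enumerate(ranked)]

  one_class = [
      {"name": "Student A", "grade_average": 93.0},
      {"name": "Student B", "grade_average": 88.5},
      {"name": "Student C", "grade_average": 85.0},
      {"name": "Student D", "grade_average": 79.5},
  ]
  for s in assign_conditions(one_class):
      print(s["name"], "->", s["condition"])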
The mathematics topics were taught as usual for 5 days, and evening homework was assigned as usual each day. On the sixth day, the in-class homework was performed. To ensure that students spent some time on each part of this homework, it was distributed and collected in three parts. In the Advance Organizer Condition, students worked on two cognitive events during the first 12-minute interval, two more cognitive events during the second 12-minute interval, and four story problems during the third 12-minute interval. Students in the Consolidator Condition did exactly the same in-class homework but worked on the four problems during the first 12-minute interval, two cognitive events during the second 12-minute interval, and two more cognitive events during the third 12-minute interval.
Instructions to the students for the in-class homework were a paraphrase of the following: “Today's homework will be done in class. You may use your reference sheet of formulas and calculators if needed. Please do your own work, without talking to your classmates, and try to complete the work without help. I will pass out the work in three parts. You will have 12   minutes to work on each part. When 12   minutes are up, I will collect your papers and give you the next part. During the class everyone will have the same work, but in different orders.”
Six days later, students were given the previously unannounced review activity. Students were asked to put away their textbooks and class notes, but they were allowed to use calculators and geometric-area formula reference sheets. The review activity was described as an opportunity to see what they knew about the book sections on area and perimeter of the geometric shapes. Students were asked to do their best to complete all the problems and were told that the review activity was good practice for part of a future chapter test. The review activity was graded for use as data but was not used as a school grade.
The experiment timeline was as follows:
Thursday: Taught area of parallelograms. Regular homework was assigned for the evening.
Friday: Taught area of parallelograms. Regular homework was assigned for the evening.
Weekend: No new material or homework.
Monday: Taught area of triangles. Regular homework was assigned for the evening.
Tuesday: Taught area of trapezoids. Regular homework was assigned for the evening.
Wednesday: Taught area of circles. Regular homework was assigned for the evening.
Thursday: In-class homework on all of the above, in either the Advance Organizer or Consolidator Condition.
Friday: Half-day of school. Taught a different topic.
Saturday–Monday: Memorial Day weekend. No related homework assigned.
Tuesday: Taught a new topic, not related to the area of the above figures.
Wednesday: Review activity.

Results

A grading rubric was designed and used by both teachers. All work was scored for partial credit according to the rubric. Mean pre-experiment grade averages of 94 students who participated in all parts of the experiment were found to be nearly identical across conditions (Advance Organizer Condition, n   =   47, GPA   =   86.8; Consolidator Condition, n   =   47, GPA   =   87.8, t (92)   =   0.69, p   =   .494, ns).
Students in the Consolidator Condition performed better on the review activity than did students in the Advance Organizer Condition, t (92)   =   2.08, p   =   .040, and also performed better on the in-class homework, t (92)   =   2.01, p   =   .047. Means are shown in Table 4.11.
Because students in the Consolidator Condition performed better on the in-class homework than did students in the Advance Organizer Condition, we separated the homework scores into two parts—scores on the problems and scores on the cognitive events—to determine whether the performance difference was from one part or both parts of the homework. Analyses revealed that average problem-solving performance was not significantly different for the two conditions, with 73.9 and 76.8 percent correct for the Advance Organizer and the Consolidator Conditions, respectively, t (92)   =   0.72, p   =   .473, ns. In contrast, the mean cognitive event performance was better for the Consolidator Condition (69.8 percent correct) than for the Advance Organizer Condition (61.7 percent correct), t (92)   =   2.59, p   =   .011. These results indicate that better performance on the in-class homework was due to better performance on the cognitive events when they occurred after working some problems, as in the Consolidator Condition.
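The comparisons reported here appear to be independent-samples t tests; the reported degrees of freedom (92 = 47 + 47 − 2) are consistent with the standard pooled-variance form, shown below for reference (the underlying score distributions are not reported).

  t = \frac{\bar{X}_{1} - \bar{X}_{2}}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}},
  \qquad s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},
  \qquad df = n_1 + n_2 - 2 = 47 + 47 - 2 = 92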

Table 4.11

Mean Percent Correct on the Review Activity and the In-Class Homework for Advance Organizer and Consolidator Conditions.
Condition (order of in-class homework):   Advance Organizer   Consolidator   Difference (Consolidator − Advance Organizer)
Review activity                           68.1                75.7           7.6
In-class homework                         66.3                72.4           6.1

Discussion

The reported findings are particularly interesting given that students' learning from the in-class homework was in addition to what they had learned from 5 days of teaching and regular evening homework. On a previously unannounced review activity given 6 days after the in-class homework, students performed significantly better when they had worked some problems and then thought about how to work such problems (the Consolidator Condition) than when they had thought about how to work problems and then worked some (the Advance Organizer Condition). Students also performed significantly better on the in-class homework itself in the Consolidator order of events. More detailed analyses revealed that this difference was due to better performance on the cognitive events part of the in-class homework, not to a difference in the problem-solving part. It is likely, therefore, that the order effect on review activity performance reflects better performance on, and therefore better learning from, the cognitive event homework when working problems is followed by reflecting on how to work them, rather than vice versa. It is as if working some problems primes the pump for deeper thinking during the Guided Cognition tasks, and this, in turn, results in better problem-solving performance after a substantial delay. In other words, thinking about the logic, definitions, and mechanics of working problems consolidates knowledge more effectively after students have spent some time and effort actually working problems.
Our results suggest modifying standard teaching strategies. Typically, teachers instruct and demonstrate a topic, assign practice problems, and then proceed to the next topic. Our findings suggest that an instructional “sandwich” of teaching, problem-solving practice, and then reflecting on how to solve the problems by engaging in appropriate cognitive events may significantly improve students' subsequent problem-solving performance.

Conclusions of Guided Cognition Research for Mathematics Homework Design

Several conclusions can be drawn from the results of the 11 mathematics experiments.
  • 1. Guided Cognition-designed homework has been found to facilitate learning of subject matter in addition to literature. Specifically, Guided Cognition has been shown to facilitate learning mathematics, and within mathematics, Guided Cognition homework has facilitated learning for a variety of topics such as fractions and mixed numbers, integers, and geometry.
  • 2. Guided Cognition homework has been found to be effective for learning mathematics. Including cognitive events with story-problem homework increased subsequent long-term performance in solving story problems and also in solving plain numerical problems. These findings indicate that Guided Cognition homework aided understanding, as required to interpret and set up story problems, and also facilitated learning procedural knowledge, as required to work the set-up story problems and to work plain numerical problems.
  • 3. The increment in problem-solving performance that results from doing Guided Cognition homework persists for many months as demonstrated by performance on an unexpected 6-month delayed review activity (quiz).
  • 4. Guided Cognition homework has been found to be efficient for learning mathematics. When study time was held constant, students who worked fewer problems (eight) but also performed cognitive events scored as well on later problem solving as students who worked many more problems (24) without performing cognitive events.
  • 5. Learning that resulted from performing cognitive events in place of additional problem-solving practice persists for many months, as measured by performance on an unexpected 14-week delayed review activity (quiz).
  • 6. Merely reading completed cognitive events that were paired with story problems was somewhat beneficial for solving story problems and numerical problems on an unexpected review activity (quiz) a few days later, but the effects were weaker than the benefits found after performing the cognitive events. Furthermore, the benefits did not persist and, in fact, reversed after 5   months.
  • 7. Considering the results of several experiments, it appears that very long-term problem-solving performance is best after Guided Cognition-designed homework with cognitive events that the students perform, next best after Traditional homework, and least after working homework problems and merely reading completed examples of corresponding cognitive events.
  • 8. Guided Cognition homework is beneficial for students of differing mathematics abilities. Guided Cognition homework with story problems facilitated performance in solving story problems and plain numerical problems by low- and average-ability students on an unexpected review activity. For the most part, these benefits were maintained for half a year, as measured on another unexpected review activity.
  • 9. Guided Cognition cognitive events may serve both as advance organizers and as consolidators during homework, but the dominant effect was found to be as consolidators. Cognitive events experienced after working problems were completed better than cognitive events experienced before working problems, and several-day delayed problem-solving performance was better for students whose homework included cognitive events after working problems.