Reflections of an Accidental Theorist

 

ALAN H. SCHOENFELD

Many years ago, David Wheeler asked me to write “Confessions of an Accidental Theorist” (Schoenfeld, 1987), in which I described how I had come to examine the research issues on which I had focused.* The SIG/RME Senior Scholar Award provides me with a wonderful opportunity to reflect once again on those and related issues. I am truly grateful for the opportunity.

The word accidental in the articles' titles refers to the fact that when I began doing both mathematical and educational research, I was “theory-neutral.” In mathematics, unless one worries about foundations (logic), one just goes about one's work: The rules of the game are so well established that one simply forges ahead, working on what one hopes is the next meaningful and significant problem. After all, a proof is a proof is a proof; people schooled in mathematics know what one is and how to produce one. My work in education started near the dawn of cognitive science, and I happily adapted tools from artificial intelligence to the study of human thinking and problem solving. My arguments at that time were essentially empirical. If I thought X was a factor in problem solving, I helped students learn X and observed whether it made them better problem solvers. If it did, then X was obviously important. This stance did not ignore theory, of course—it depended on an information processing perspective rather than a behaviorist perspective, for example—but it made somewhat passive use of it. As I evolved as a researcher, however, I came to realize that being explicit about theory and models helped me clarify what I was trying to understand and to test and refine my ideas. I am now firmly committed to the dialectic between theory and model-based empiricism as a core component of my work.

In what follows, I outline three core principles of my work as it has evolved, drawing on my research history for illustrations. I conclude by addressing some issues that the reviewers asked me to discuss.

Core Principles

1. Theory matters. If you take theory and models seriously, then (a) you need to elaborate clearly for yourself “what counts” and how things supposedly fit together, and (b) you must hold yourself accountable to data (see Schoenfeld, 2007). From my perspective, theory is—or should be—the lifeblood of the empirical scientist. Conversely, all educational (more broadly, social science) theory should routinely be tested against empirical data.

2. One makes progress by systematically pushing the boundaries of the problem space in order to see where the theory “breaks.” That is, it is essential to choose cases for analysis that you think you might be able to understand and that have the following property: If you succeed in explaining them, you will have expanded the scope of the theory, and if you fail, you have found a limitation of the theory.

3. One can make progress by keeping one's eyes open for interesting things. From my earlier article: “Human problem solving behavior is extraordinarily rich, complex, and fascinating—and we only understand very little of it. It's a vast territory waiting to be explored…I'm convinced that…if you just keep your eyes open and take a close look at what people do when they try to solve problems, you're almost guaranteed to see something of interest” (Schoenfeld, 1987, p. 38). I note that interesting is a theoretically laden term: What turns out to be interesting is often what does not quite jibe with one's theoretical expectations, so (cf. point 1) the more explicit one is about one's theoretical commitments, the more likely it is that something interesting will lead to a productive line of inquiry.

On Keeping Your Eyes Open

I begin autobiographically, moving at breakneck speed because my early problem-solving work (e.g., Schoenfeld, 1985, 1992) is well known and much of this story was encapsulated in the earlier article (Schoenfeld, 1987). In the early 1970s, I read Pólya's (1945) How to Solve It. It felt right; Pólya seemed to describe the kinds of problem-solving strategies I was using as a mathematician. But I asked problem-solving experts, and they said, “Pólya doesn't work.” This (interesting) contradiction is what got me started in educational research. I took a postdoc at Berkeley, where my mentor, Fred Reif, offered me a wonderful deal: “Read until you get sick of reading, at which point we will assume you are literate. Then you can start to do research.”

I read Newell and Simon's (1972) Human Problem Solving, in which they looked at human problem solving with an eye toward abstracting regularities in problem-solving performance and implementing those regularities as computer programs. The idea was that computers and humans were information processors. Inspired by Pólya on the one hand and Newell and Simon on the other, I decided to look at people solving problems to see if I could find out what would enable other people (rather than computers) to become more effective at solving problems. This was my first step toward empiricism: I needed to look closely at what people do! Note that this theoretically driven empiricism is a source of inspiration as well as the basis of accountability to data. As I built up theoretical ideas about what counted in problem solving, I formed an empirical rule: Ideas gleaned from the research should be tried out in the classroom, both for inspiration and validation. This, in short, is how one holds theory accountable to data, and vice versa. (The problem-solving research and development I conducted from 1975 to 1985 was, in effect, a decade-long design experiment [Cobb, Confrey, diSessa, Lehrer, and Schauble, 2003; Schoenfeld, 2006]. Theoretical ideas were tested in practice, and both theory and instructional design were modified in the light of performance data.)

What I found first was that Pólya's heuristic problem-solving strategies were much more complex (and therefore more difficult to learn) than he had suggested. For example, a “simple” strategy such as “solve an easier related problem” is not really one strategy; rather, it is a collection of more than a dozen strategies. Note that the theoretical lens of information processing was essential here. I asked this question, “If I start by assuming that the problem solver has typical (human) information-processing capacities, how can I specify any particular strategy so that the problem solver can implement it?” What seemed to be reasonably specified strategies turned out to be vague and ill-defined; I had to go to a finer level of detail for them to be implementable. The fact that the observations were theory-based is what led to the new findings.

Observations revealed the true complexity of implementing Pólya's heuristic strategies. For example, to use the strategy “Make sense of the problem by looking at examples,” one must (a) think to use the strategy, (b) know which version of the strategy to use, (c) generate the appropriate examples, (d) gain the insight needed from the examples, and (e) use that insight to solve the original problem. In light of these findings, it was no surprise that Pólya's ideas had been so difficult to implement. Heuristic problem-solving strategies are difficult to learn and to use, and students need detailed training. However, once one had become aware of the relevant level of detail, the strategies could be taught. My college students were amazingly successful (see Schoenfeld, 1985, for details).

Solving the grain-size problem raised a previously unseen problem.1 Each of the two dozen or so powerful strategies in Pólya's How to Solve It was, in itself, 10–20 strategies, meaning that students had to learn hundreds of strategies. Controlling all these strategies was a challenge—one had to have a strategy for figuring out which strategies to try and when to try them. This led me to the study of monitoring and self-regulation, an aspect of metacognition. I discovered how serious this issue was when I asked students to solve problems before they took my problem-solving course. More than 60% of their attempts consisted of reading the problem, picking an approach, and pursuing that approach until they ran out of time. Absent the reconsideration of unsuccessful approaches—and a very large percentage of the initial approaches chosen by students were unsuccessful—the students were guaranteed to fail.

Once I was aware of this problem, it could be addressed. As my students worked on problems in class, I pestered them repeatedly with these questions: What are you doing? Why are you doing it? and How will it help you solve the problem? Over time, the students internalized these questions and improved at monitoring and self-regulation. But this was not enough.

Of fundamental interest to me was the question of what caused students difficulty. Of course, it is not very interesting if students fail to solve a problem because they do not have adequate domain knowledge. Thus, when I had students work problems in my lab, I chose problems they should have been able to solve. At the time, plane geometry was a required course for 10th-grade, college-intending students. Thus, my students should have known enough mathematics to be able to solve straightforward plane geometry problems.

I gave them a simple straightedge-and-compass construction problem. To my great surprise, the students approached the problem in purely empirical fashion. They made conjectures and then tried the constructions they had conjectured might work, in order to see whether they did. In clinical interviews, I discovered that the students actually knew a substantial amount of geometry—they were able to derive the properties of the desired circle. But then, when asked to do the construction, they ignored the result that they had just proved! This observation of anomalous behavior led to the study of beliefs. By now this story is familiar, so I will not repeat it (see Schoenfeld, 1985, 1992). Once again, the observation of unexpected behavior led to a new set of studies addressing the question, Where did such counterproductive behavior come from? To explore this issue seriously required that I spend a fair amount of time observing instruction in local schools, where I discovered that student beliefs originated in their experiences with school mathematics. That is, the practices in which they engaged were the source of their beliefs (Schoenfeld, 1988, 1992).

By the late 1980s, I had found that the following were major determinants of problem-solving success or failure: the knowledge base, heuristic strategies, metacognition (specifically, monitoring and self-regulation), beliefs, and practices. However, I did not have a theoretical description of how and why people made the choices they did while solving problems. That was the next, and most fundamental, question—and the major goal of my research agenda.

On Pushing the Boundaries of the Problem Space

The first decade of my research in education was devoted to the study of people solving (mathematical) problems in isolation, in the research laboratory. This obviously artificial setting was, to put things simply, a reflection of the state of the art: Given the research tools and techniques available in the 1970s and 1980s, it was all we could do to study thinking in isolation. The major goal, of course, was to understand thinking and problem solving in general—in any problem-solving context in any domain. Understanding this calls for developing a theory of human decision making. (Members of my research group know that my long-term research goal—sometimes glimpsed over the horizon, sometimes not—has been the “theory of everything,” or TOE.) Changing directions somewhat and moving to the study of tutoring allowed me to delay a head-on attack on the vexing problem I had been unable to solve (how and why people make the decisions they do). However, at the same time it moved my overall agenda forward. Tutoring is a complex form of problem solving in which mathematical decision making and social interactions are involved. Thus, my work expanded into the social domain. In addition, I had the longer term goal of understanding teaching. Tutoring is a stepping-stone toward that larger goal, in that it is less complex but involves some of the same complexities of decision making.

We brought a student into my lab for some tutorial sessions. The social interactions in the session tapes were messy, rendering these sessions unsuitable for extended analysis, I thought, but Abraham Arcavi said, “Alan, there are really interesting things going on in these tapes. We should take a closer look” (personal communication, December 7, 1986). Abraham, Jack Smith, and I did. The student's errors, which were resistant to straightforward tutoring, turned out to be rooted in some very deep misconceptions. Her incorrect, but robust, understandings shaped what she saw, what she remembered, and what she forgot. Specifically, new pieces of information that did not fit with what she knew tended to fade away, even though the new things were correct and what she knew was wrong. When our analyses were completed, we had a new article (Schoenfeld, Smith, and Arcavi, 1993), a new way of thinking about people's knowledge structures and how they changed, and some new methods (microgenetic analyses) for charting that growth and change.

I want to emphasize that this work embodied all three of the core principles discussed in the introduction. First, the work was theory driven. The cognitive-science approach to representing knowledge as networks of nodes and connections allowed us to conceptualize and chart the changes in the student's knowledge structures—and, ultimately, to see the limitations of the perspective with which we had started. (Specifically, if freshly learned material did not connect to established learning structures, it faded away—and, because the old mal-knowledge did fit those structures, mal-knowledge that had been temporarily replaced by the correct knowledge could regenerate itself. Because her knowledge structures, including incorrect ones, were robust, they resisted change, and it took major work to undo the mal-knowledge.) Second, what was interesting—the robustness of the student's mal-perceptions—was interesting in part because we had theoretical expectations, and her learning trajectory violated them. Third, as noted, we were expanding the space of inquiry into the social (although still in the lab. One step at a time…).

At the time, there were two bodies of research on tutoring. One focused on subject matter. It looked closely at student understanding and how to move it forward, but it ignored “human factors.” A second focused on human factors, examining issues such as intrinsic versus extrinsic motivation. However, there were no connections between them, and that made no sense. Sometimes, something a student says or does requires an immediate mathematical response, for example, when the student says “(a + b)² = a² + b².” Of course, how the tutor responds to a statement like this depends very much on what the tutor knows (“What options do I have to address this misconception?”) and what he or she believes (“Do I need to work carefully through this, or just give the student the correct formula?”). Sometimes, something a student says or does requires an immediate personal response, for example, when the student looks weary or disheartened. Here, too, how the tutor acts depends very much on what the tutor knows (“What can I do to restore equilibrium?”) and believes (“How important is it to pursue the content? How important is it to be sympathetic and back off for the moment?”). But in both cases, whether the event is content-related or affect-related, something has happened that causes the tutor to consider/reconsider how things are going. On the basis of his or her knowledge and beliefs, the tutor either puts new goals in place or continues to pursue the current high-priority goals. That is, the tutor's evolving top-level goals determine the course of action. Such a goal-directed architecture allowed us to model tutoring decisions and to unify the two literatures. And it led to the question (and the next expansion of the space), Might this architecture be the correct one to model teaching?

Although classroom environments are typically much more chaotic than tutoring environments, the basic question is the same: Why do teachers make the choices they do? We hypothesized that the answer is the same: A teacher enters with a plan, and then makes adjustments on the basis of (a) what happens, (b) beliefs about what is important to pursue, and (c) the knowledge that she or he can bring to bear.

Once again, we had good luck. Mark Nelson, a student teacher in our teacher preparation program, said to the head of the program, Dan Zimmerlin, “I didn't like the way today's lesson went. Can you help me understand why?” Zimmerlin said, “Bring the tape to Alan's research group. He wants to study teaching.” Four months later, we understood the problem, and we were able to explain why Nelson had done what he did (see Schoenfeld, 2000; Zimmerlin and Nelson, 2000). It turned out that Nelson's pedagogical choices were a function of his knowledge, goals, and beliefs. (His beliefs determined what he would and would not do in the classroom. In this case, they kept him from using some of his knowledge.) We hypothesized that this was the case in general.

It was time to choose a new tape for analysis. In line with Core Principle 2, “explore different dimensions of the problem space,” I needed to choose an example that was significantly different from the tape we had analyzed. Nelson was a new teacher, teaching a traditional lesson. So I needed a tape of an experienced teacher. Yet again, there was good luck: Emily van Zee, who was doing a postdoc at Berkeley and attending my research group, brought in for discussion a tape of Jim Minstrell's physics teaching. Minstrell is a very well known teacher-researcher, and van Zee and Minstrell had written an article about his teaching style.

I asked Minstrell if I could use his data for an independent analysis. He said yes, and a year later we had modeled the full hour of instruction. There is a huge amount of detail in the analysis (see Schoenfeld, 1998). Here I will simply point to the main aspects of the analysis. The formal content of Minstrell's lesson involves the use of mean, median, and mode. But the main point of the lesson is that he wants his students to see that such formulas need to be used sensibly. The previous day, eight students had measured the width of a table, obtaining the values 106.8, 107.0, 107.0, 107.5, 107.0, 107.0, 106.5, and 106.0 cm. Minstrell wanted the students to discuss the “best number” for the width of the table: Which numbers—all or some—should they use? How should they combine them? With what precision should they report the answer? He had a flexible script for each part of the lesson: (a) raise the issue; (b) ask for a student suggestion; (c) clarify and pursue the suggestion by asking questions, inserting some content if necessary; (d) once this suggestion has been worked through, ask for more suggestions; and (e) when students run out of ideas, either inject more ideas or move to the next part of the lesson. We analyzed the lesson in fine detail—decomposing the lesson into smaller and smaller episodes, noting which goals were present and at what levels of activation, and observing how transitions corresponded to changes in goals. When things went “according to plan,” the lesson was easy to model.
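For concreteness, here is a quick computation of the standard candidates for the “best number.” The code is mine, added for illustration; it is not part of the lesson materials or the published analysis.

```python
from statistics import mean, median, mode

# The eight measurements (in cm) of the table width from Minstrell's lesson.
widths = [106.8, 107.0, 107.0, 107.5, 107.0, 107.0, 106.5, 106.0]

print(f"mean   = {mean(widths):.4f} cm")  # 106.8500
print(f"median = {median(widths)} cm")    # 107.0
print(f"mode   = {mode(widths)} cm")      # 107.0
```

Note that the mean, 106.85 cm, already puts the precision question in play: it reports finer precision than any individual measurement.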

But what can one say when things do not go according to plan? This happened when Minstrell was reviewing various ways to compute the “best value” to represent the eight measurements given above. The class had discussed mean and mode when a student raised her hand and said,

This is a little complicated, but I mean it might work. If you see that 107 shows up four times, you give it a coefficient of 4, and then 107.5 only shows up one time, you give it a coefficient of 1, you add all those up, and then you divide by the number of coefficients you have.
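The suggestion is essentially a frequency-weighted mean, although the phrase “divide by the number of coefficients you have” is ambiguous. As a gloss of my own (not part of the published analysis), the two readings give very different results:

```python
from collections import Counter

widths = [106.8, 107.0, 107.0, 107.5, 107.0, 107.0, 106.5, 106.0]
coeffs = Counter(widths)  # each distinct value -> its coefficient (frequency)

weighted_sum = sum(value * n for value, n in coeffs.items())

# Reading 1: divide by the sum of the coefficients (8, the number of
# measurements). This recovers the ordinary mean.
print(round(weighted_sum / sum(coeffs.values()), 2))  # 106.85

# Reading 2: divide by the number of distinct values (5). This yields a
# number that cannot possibly be the width of the table.
print(round(weighted_sum / len(coeffs), 2))           # 170.96
```

Under the first reading the proposal is mathematically sound; under the second it is not. Part of what a teacher must decide, in the moment, is whether and how to unpack exactly this kind of ambiguity.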

There is a wide range of possible responses, ranging from “That's a very interesting question. I'll talk to you about it after class” to “Let's make sure we all understand what you've suggested and then explore it.” Each has different entailments for how the class will play out. The research challenge: Is it possible to say how Minstrell will respond?

According to our model, Minstrell's fundamental belief about his physics teaching is that physics is a sense-making activity and that students should experience it as such. One of his major goals is to support inquiry and to honor student attempts at figuring things out. Minstrell's knowledge base includes favored techniques such as “reflective tosses,” in which one asks questions that get students to explain or elaborate on what they have said. Thus he will choose to pursue the student's suggestion, using reflective tosses (for details, see Schoenfeld, 1998). We modeled Minstrell's decision using a form of subjective expected utility (or cost-benefit analysis).2 This form of modeling has worked consistently in a variety of situations in which we have tried to capture nonroutine decision making.
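To convey the shape of such a calculation, here is a minimal sketch. The option set, outcome dimensions, and numerical weights are invented for illustration; they are not the values of the published model (Schoenfeld, 1998).

```python
# Decision making modeled as subjective expected utility: each option is
# scored by how well its likely outcomes serve the teacher's orientations.

options = {
    "defer to after class":         {"honor student inquiry": 0.1, "stay on schedule": 0.9},
    "give the answer directly":     {"honor student inquiry": 0.3, "stay on schedule": 0.8},
    "pursue via reflective tosses": {"honor student inquiry": 0.9, "stay on schedule": 0.4},
}

# Orientations (beliefs, values) determine how much each outcome matters.
# A teacher who sees physics as sense-making weights inquiry heavily.
orientations = {"honor student inquiry": 0.8, "stay on schedule": 0.2}

def subjective_expected_utility(outcomes):
    return sum(orientations[dim] * value for dim, value in outcomes.items())

best = max(options, key=lambda opt: subjective_expected_utility(options[opt]))
print(best)  # -> pursue via reflective tosses
```

With orientations that weight inquiry heavily, the highest-valued option is the reflective toss, matching the qualitative account above.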

What next? At a meeting I saw a video of Deborah Ball teaching a third-grade class (the “Shea number” tape). The lesson was amazing. In it, the third graders argued on solid mathematical grounds; the discussion agenda evolved as a function of classroom conversations; the teacher seemed at times to play a negligible role; and sometimes she made decisions that people have said did not make sense. In addition, I had little or no intuition about what happened. Given Core Principle 2, this was ideal. There were major differences from previous tapes we had studied in grade level, content, psychological (developmental) issues, classroom dynamics, the “control structure” for the classroom, and the teacher's role. What a challenge!

Three years later, our analyses showed that Ball employs a “debriefing routine” that consists of asking questions and fleshing out answers in a particular way—and that she used that routine five times in the first six minutes of class. What seems somewhat unstructured on casual viewing turns out to be very highly structured. Moreover, Ball's controversial decision (in which she led a student on a mathematical excursion that ran the risk of derailing her own announced agenda) can be modeled as a principled move entirely consistent with her larger agenda—once you know that the success of the next part of her lesson hinged on students' understanding of the issue she discussed with the student.

Ball's lesson segment has been modeled on a line-by-line basis (see Schoenfeld, 2008). In addition, once the modeling was done, some very interesting consistencies between Ball's routine for getting students to clarify their understandings and Minstrell's interactive routine became apparent. After completing the analysis, I realized that I use a variant of the same routine in teaching my problem-solving courses. It may well turn out to be a general, learnable routine for supporting highly interactive, student-centered classrooms (Schoenfeld, 2002). Moreover, given that the theory of teaching-in-context (the claim that a teacher's in-the-moment actions can be modeled as a function of the teacher's attributed knowledge, goals, beliefs/orientations, and a particular kind of decision making) had proved successful in allowing us to model three radically different cases of teaching (Nelson, Minstrell, and Ball), the theory was demonstrably robust.

What next? There are a number of possible directions, some of which I am pursuing now and some of which I hope to pursue in the future. The one on which I am working at present is an abstraction of the theory of teaching-in-context, a general theory of human in-the-moment decision making. Teaching is somewhat special in the complexity of its interactions, but in many ways it typifies knowledge-intensive domains in which practitioners engage in a substantial amount of well-practiced behavior, punctuated by episodes of nonstandard decision making. Other domains that can be characterized in this way include cooking, for example, preparing and cooking a meal; crafts such as automobile mechanics and electronic troubleshooting; and routine medical diagnosis and practice (see the extensive Artificial Intelligence literature on how doctors' orientations to patients and disease shape their diagnoses, how routine is followed, etc.). Assuming that human brains work the same way across similar domains, the architecture of the theory of teaching-in-context should apply in those arenas as well.3 In short, I claim that goal-oriented “acting in the moment”—including problem solving, tutoring, teaching, cooking, and brain surgery—can be explained and modeled using a theoretical architecture in which the following are represented: an individual's knowledge, goals, “orientations” (an abstraction of beliefs that includes values, preferences, etc.), and decision making (captured in an “internal calculus” that can be modeled as a form of cost-benefit analysis). I hypothesize that things work as described in Box 1.

Given this claim, in what domains beyond teaching might I test it? Medicine was one of the domains on my list. I like my doctor, and I know that she is intellectually curious, so I asked her whether I could tape one of our routine visits and analyze it. She said yes. As it happens, it was easy to model her actions. Like most general practitioners, she has a family of “disease-related scripts” that govern her interactions with patients who have known diagnoses. I have adult-onset diabetes, and it is easy to see how her actions conform to a “Type II diabetes script,” in which she works through the numbers on my most recent lab tests with me and exhorts me to be a better patient than I am inclined to be. Modeling her actions (which are much less complex in this situation than those of a teacher handling a whole class) provided some confirmation of the generality of the approach. (See Schoenfeld, 2010b, for the general argument and the model of the diagnostic interaction. Note as well that there are extensive psychological and Artificial Intelligence literatures on medical diagnosis.)

Beyond that, the interaction with my doctor had been very productive. This raised some interesting questions. Could I understand why it had been so productive? Because there were only two participants in the conversation, it seemed reasonable to model both. When I modeled doctor and patient (with regard to the categories of analysis—knowledge, goals, orientations, and decision making), it turned out that there was an excellent match between our goals. In particular, when one participant acted in a way that made a goal clear, the other participant picked up on the goal and made it his or her own. Thus, goal synchronization appears to be a major factor in making the conversation productive!

Box 1. How Things Work in Outline

•  An individual enters into a particular context with a specific body of knowledge, goals, and orientations (beliefs, dispositions, values, preferences, etc.).

•  The individual orients to the situation. Certain pieces of information and knowledge become salient and are activated.

•  Goals are established (or reinforced, if they pre-existed).

•  Decisions are made, consciously or unconsciously, in pursuit of these goals.

(a) If the situation is familiar, then the process may be relatively automatic, in which case the action(s) taken is (are) in essence the access and implementation of scripts, frames, routines, or schemata.

(b) If the situation is not familiar or there is something nonroutine about it, then decisions are made via an internal calculus that can be modeled by (i.e., is consistent with the results of) the subjective expected values of available options, given the orientations of the individual.

•  Implementation begins.

•  Monitoring (whether effective or not) takes place on an ongoing basis.

•  This process is iterative, down to the level of individual utterances or actions.

(a) Routines aimed at particular goals have subroutines, which have their own subgoals.

(b) If a subgoal is satisfied, the individual proceeds to another goal or subgoal.

(c) If a goal is achieved, new goals kick in via decision making.

(d) If the process is interrupted or things do not seem to be going well, decision making is activated once again. This may or may not result in a change of goals and/or the pathways used to try to achieve them.

Source: From Schoenfeld, 2010a.
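To make the architecture concrete, the toy sketch below renders the iterative goal/subgoal structure of Box 1 as runnable code. All of the names and the miniature “lesson script” are my own scaffolding, not constructs from Schoenfeld (2010a); the nonroutine branch (step b) would invoke a subjective-expected-value calculation like the earlier sketch.

```python
# Routines aimed at particular goals have subroutines with their own
# subgoals (Box 1); leaves of the goal tree map to concrete actions.

SUBGOALS = {
    "teach the lesson": ["raise the issue", "work through suggestions"],
    "work through suggestions": ["elicit a suggestion", "clarify and pursue it"],
}
ACTIONS = {
    "raise the issue": "pose the 'best number' question",
    "elicit a suggestion": "ask for a student proposal",
    "clarify and pursue it": "use a reflective toss",
}

def pursue(goal, depth=0):
    print("  " * depth + "goal: " + goal)
    if goal in SUBGOALS:
        for subgoal in SUBGOALS[goal]:
            pursue(subgoal, depth + 1)
            # Monitoring happens here on an ongoing basis: an interruption,
            # or a subgoal that is going badly, would re-invoke decision
            # making and possibly replace the remaining subgoals.
    else:
        print("  " * (depth + 1) + "act: " + ACTIONS[goal])

pursue("teach the lesson")
```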

What next? There are a number of possibilities. First, this commentary and my recent book (Schoenfeld, 2010b) make a general claim regarding the architecture of people's in-the-moment acting and decision making. A significant amount of empirical work needs to be done to test and refine that claim. Second, I think that the idea of goal synchronization discussed in the previous paragraph (see also chapter 7 of Schoenfeld, 2010b) has great promise as a theoretical and empirical tool. What makes for a highly productive classroom? I strongly suspect that goal synchronization (between students, and between students and teacher) is a significant part of that story. The analytical tools developed in the chapter that analyzes my conversation with my doctor are general, and I think they can be used productively for the study of classroom interactions. This could provide a powerful way to integrate cognitive and social analyses. Finally, there is the issue of integrating a theory of learning into the current theory. At present, the theory is about acting in the moment: The individual makes decisions on the basis of his or her current knowledge, goals, and orientations. A logical next step is to build models of acting-in-the-moment that incorporate ongoing changes into their descriptions of current knowledge, goals, and orientations. For example, as a result of a classroom dialogue, a teacher might know or believe more than before about a particular student. The teacher might think that a certain approach to a topic is less useful than she or he had previously thought, and have in mind modifications or variations to try the next time. In reality, a teacher's knowledge, goals, and orientations are being continuously updated. It would be interesting and potentially useful to build models that take this kind of learning into account.

Discussion

I return to the three core principles outlined at the beginning of this article and briefly address two issues raised by reviewers. Core Principle 1, the importance of taking theory seriously, is absolutely central. That theme has permeated these reflections, so I will merely summarize some of my main assertions here. First, it is when issues are couched in theoretical terms that one can hypothesize and test their generality. Second, one's theoretical commitments, whether tacit or explicit, shape what one sees and deems important. Thus, there is much to be gained (and many pitfalls to be avoided!) by being explicit about one's assumptions. Third, making sure to cover all aspects in the theory-model-empirical-data triad is an extremely powerful way of improving and refining one's ideas. From my perspective, taking theory seriously means holding oneself accountable to empirical data. (Armchair educational theory is about as useful as armchair philosophy.) Moreover, how one holds oneself accountable to data is critically important. It is easy to provide ad hoc explanations of individual events, and this is dangerous. Theoretical commitments (along with their instantiations as context-specific models) guard against ad hocism. It is one thing to claim to have a general theory of teaching, problem solving, or decision making. It is quite another to build a model, or even the outline of a model, that uses only constructs sanctioned by the theory and that “captures” the behavior that is being modeled in some significant way. Taking the theory-model-empirical-data triad seriously is thus a way of keeping oneself intellectually honest and also making progress.

Core Principles 2 and 3 lie at the heart of a productive empirical research program. Much of this article's narrative has been organized along those two lines (see also the following), and I hope their import is clear. It is worth noting, once again, that both principles are deeply theory-laden. One's theoretical perspective structures the dimensions of the problem space, so a systematic exploration of those dimensions (Principle 2) is de facto theoretically driven. And, as noted, the most interesting and potentially productive observations (Principle 3) are the ones that do not quite fit with our theoretical expectations and compel us to take a closer look at what is happening.

Reviewers asked me to comment on steps one can take to build a productive career and to discuss some of the ways in which my research group works. For the first, I would reformulate some of the statements above as recommendations. Personally, I think it is essential as a researcher to work on a big problem that one thinks is truly important and about which one cares deeply. There are a sufficient number of big problems on which to work, and the choice is a matter of taste: One could concern oneself with teachers' professional development, helping middle school students (especially disenfranchised students) come to grips with middle school mathematics in meaningful ways, or understanding students' mathematical learning disabilities, to name just three. Next, one has to find a toehold—a part of the problem on which one can make tangible and meaningful progress. One should then focus one's attention on the manageable subproblem while keeping the larger problem in the back of one's mind. My experience has been that once the subproblem has been solved, one sees more clearly and is in a better position than before to make interesting observations, produce generalizations, or explore the problem space. In consequence, addressing the big problem becomes an iterative process: Each solution raises new questions or makes previously intractable questions potentially approachable. This kind of approach, I believe, guarantees that one has plenty to work on while maintaining a sense of direction.4

That approach, along with one major addition, shapes both the raison d'être and the modus operandi of my research group. I see my primary role as a mentor, helping talented young scholars learn to harness their passions in the ways described in the previous paragraph. (The topics I mentioned in that paragraph are each the focus of current members of the group, and they are rich enough to keep those group members productively engaged for many decades.) The missing ingredient for young scholars is one that comes with experience: learning how to take a complex problem and find the right toehold. An important problem is complex and messy; the challenge is to figure out how to address a part of it that is meaningful and manageable. Thus, much of my advising consists of discussing such issues—most often in the whole group rather than in individual sessions. Group members bring their work—at every stage, from initial conception through multiple reconceptualizations, to initial data gathering and interpretation (and yet more reconceptualizations), through final paper or dissertation writing—and all of us collectively discuss that work-in-process, including mine (see Schoenfeld, 1999, for some detail). The collective discussions provide a mechanism for initially watching and then, over time, becoming increasingly engaged. The result is an apprenticeship into the habits of mind that I hope and believe will serve the students well as researchers. I can only wish for them as much fun in pursuing the issues about which they care as I have had doing the same.

Acknowledgments

I am grateful to the reviewers and to Ed Silver, whose suggestions induced me to turn what was in essence a chronological narrative into a somewhat more traditionally structured research commentary. I also thank Cathy Kessel, Noreen Greeno, Yoshi Shimizu, Marty Simon, and the extended functions group for their comments and suggestions.

Notes

1. This is a general and critically important issue. Almost always, coming to a deeper understanding of some phenomenon allows one to see things that were hitherto invisible. Thus, if one is working on a large and significant problem, it is likely to unfold gradually as aspects of the problem are addressed successfully.

2. I hasten to add that modeling in this way does not presuppose that all teaching decisions are “rational.” Indeed, subjective expected utility turns out to be an ironically “rational” way of capturing the consistent irrationalities in people's decision making!

3. Note that the expression knowledge-intensive domains, in which practitioners engage in a substantial amount of well-practiced behavior, punctuated by episodes of nonstandard decision making, is theoretically laden, as is the assumption that the cognitive architecture will be the same for cooking, automobile repair, and medical practice. Once again, it is a set of underlying theoretical assumptions that provides the basis for generalization and abstraction.

4. I say this upon reflection; I certainly cannot say that I followed this rule explicitly, although I seem to have been true to it. In that sense, this “Research Commentary” is also the reflection of an accidental metatheorist.

*This research commentary is derived from my AERA SIG/RME Senior Scholar Award Presentation at the annual meeting of the American Educational Research Association, April 13–17, 2009, San Diego, California.

References

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., and Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9–13.

Newell, A., and Simon, H. A. (1972). Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall.

Pólya, G. (1945). How to solve it: A new aspect of mathematical method. Princeton, N.J.: Princeton University Press.

Schoenfeld, A. H. (1985). Mathematical problem solving. Orlando, Fla.: Academic Press.

Schoenfeld, A. H. (1987). Confessions of an accidental theorist. For the Learning of Mathematics, 7(1), 30–8.

Schoenfeld, A. H. (1988). When good teaching leads to bad results: The disasters of “well taught” mathematics classes. Educational Psychologist, 23, 145–66.

Schoenfeld, A. H. (1992). Learning to think mathematically: Problem solving, metacognition, and sense-making in mathematics. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 334–370). New York: Macmillan.

Schoenfeld, A. H. (1998). Toward a theory of teaching-in-context. Issues in Education, 4(1), 1–94. Retrieved October 21, 2009, from www-gse.berkeley.edu/faculty/AHSchoenfeld/AHSchoenfeld.html.

Schoenfeld, A. H. (1999). The core, the canon, and the development of research skills: Issues in the preparation of education researchers. In E. C. Lagemann and L. S. Shulman (Eds.), Issues in education research: Problems and possibilities (pp. 166–202). San Francisco: Jossey-Bass.

Schoenfeld, A. H. (2000). Models of the teaching process. Journal of Mathematical Behavior, 18, 243–62.

Schoenfeld, A. H. (2002). A highly interactive discourse structure. In J. Brophy (Ed.), Advances in research on teaching: Vol. 9. Social constructivist teaching: Its affordances and constraints (pp. 131–70). New York: Elsevier.

Schoenfeld, A. H. (2006). Design experiments. In P. B. Elmore, G. Camilli, and J. Green (Eds.), Handbook of complementary methods in education research (pp. 193–206). Mahwah, N.J.: Erlbaum.

Schoenfeld, A. H. (2007). Method. In F. K. Lester Jr. (Ed.), Second handbook of research on mathematics teaching and learning (pp. 69–107). Charlotte, N.C.: Information Age.

Schoenfeld, A. H. (2008). On modeling teachers' in-the-moment decision-making. In A. H. Schoenfeld (Ed.), Journal for Research in Mathematics Education monograph series: Vol. 17. A study of teaching: Multiple lenses, multiple views (pp. 45–96). Reston, Va.: National Council of Teachers of Mathematics.

Schoenfeld, A. H. (2010a). How and why do teachers explain things the way they do? In M. K. Stein and L. Kucan (Eds.), Instructional explanations in the disciplines. New York: Springer.

Schoenfeld, A. H. (2010b). How we think: A theory of goal-oriented decision-making and its educational applications. New York: Routledge.

Schoenfeld, A. H., Smith, J. P., III, and Arcavi, A. A. (1993). Learning: The microgenetic analysis of one student's evolving understanding of a complex subject matter domain. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 4, pp. 55–175). Hillsdale, N.J.: Erlbaum.

Zimmerlin, D., and Nelson, M. (2000). The detailed analysis of a beginning teacher carrying out a traditional lesson. Journal of Mathematical Behavior, 18, 263–79.
