10

Conclusion

Problems, problems

We began by looking at what constitutes a problem and ended with what goes on in the brain when people try to solve problems. With experience we learn to deal with everyday problems, particularly the biologically primary ones, to ensure our survival. We can develop general skills such as driving cars, sewing, operating machinery, putting up curtains, typing and so on. We also develop specific skills such as playing the electric guitar, solving complex mathematical equations, designing bridges, repairing satellites and the like. So situations or tasks that were once problematic become automatic, straightforward or at least tractable. At first, not knowing what aspects of a situation are relevant, not knowing how to get round constraints, or not being aware of what operators can be applied means that our understanding of a situation or problem is impoverished – we have a problem. Experience teaches us how to operate inside constraints, what operators to apply and when to apply them, and how to respond, often automatically, to a given situation – and the problem goes away. We know what to do to achieve our goal, and this is accompanied by changes in the areas of the brain involved.

Constraints

When a goal is blocked you have a problem; when you know ways round the block or how to remove it, you have less of a problem. Often the blocks are caused by the constraints that are imposed by the problem itself (the Tower of Hanoi wouldn’t be a problem if it weren’t for the constraints; jetting off for a holiday in Mexico wouldn’t be a problem if it weren’t for the constraints of money, job or small infants to look after). And the environment can impose unexpected constraints – a task becomes difficult when there is a lot of loud noise or when the weather turns bad. Then there are constraints imposed by the solver. An inability to solve some problems may be due to unnecessary constraints that were never mentioned in the problem statement, or that arise from the way you represented the problem in the first place. Aspects of a problem that appear salient or important may just be the result of mental set. Insight problems provide examples, but the same issues can occur in everyday problems.

It can happen that we don’t even realise we are suffering from constraints that aren’t necessarily there. To take an example from an unusual domain, in mainland Britain, knitting a pullover tends to involve a couple of knitting needles to generate 2D panels that get sewn up. This constrains one’s ability to test how well it fits and leaves a seam. An alternative is using a single long pliable needle and knitting in the round, creating a 3D shape and avoiding seams. Furthermore, the knitter can start from the neck and work down or even start halfway up the body and knit up to the neck. The garment can then be tried on to see how well it fits as the knitting progresses from halfway down.

Operators

In order to solve problems you either need to know what to do or use some kind of heuristic to get round the fact that you don’t exactly know what to do. If the operators are unknown to begin with, then you have an ill-defined problem and you may have to rely on a generate and test strategy or even a trial and error strategy. Either way, based on whatever representation you generate of the problem, you have to retrieve relevant operators from long-term memory or rely on the environment to constrain or dictate what you can do. For example, try Activity 10.1.
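As a rough illustration (not from the text), the generate-and-test strategy can be sketched in a few lines of code: candidate states are produced one after another, and each is checked against the goal criterion.

```python
# A toy sketch of generate and test (an illustration; the example
# problem below is invented, not taken from the chapter).

def generate_and_test(candidates, is_goal):
    """Return the first candidate that satisfies the goal test, else None."""
    for state in candidates:   # generate a candidate state
        if is_goal(state):     # test it against the goal criterion
            return state
    return None

# Invented example: find a two-digit perfect square whose digits sum to 9.
result = generate_and_test(
    range(10, 100),
    lambda n: round(n ** 0.5) ** 2 == n and sum(map(int, str(n))) == 9,
)
print(result)  # 36
```

The strategy is exhaustive and blind – it is what a solver falls back on when no heuristic narrows the search – which is why it becomes impractical as the space of candidates grows.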

Activity 10.1

I have a bottle of wine that costs £10. The wine in the bottle costs £9 more than the bottle itself. How much does the bottle cost?

It goes without saying that you are also likely to encounter difficulties when the domain is unfamiliar. Another barrier to successful problem solving is having the relevant operators in long-term memory but retrieving the wrong ones. You might add instead of subtract, or the wording might trigger an operator that is not appropriate. In Activity 10.1 there are two numbers (10 and 9) and the wording says “more than”, which seems salient and triggers a subtraction operator, so subtracting the 9 from the 10 is an almost automatic response. Consider a different problem: “I have a bottle of wine that costs £10. The wine in the bottle costs £9. How much does the bottle cost?” The answer in this case is £1, but it’s not the same problem as the one in Activity 10.1. Or suppose we changed the problem slightly: “I have a bottle of wine that costs £10. The wine in the bottle costs £9 more than the bottle itself. How much does the wine cost?” This version is much more likely to lead to the correct answer than the original, and without the bafflement the original causes when the solver is told that the bottle does not cost £1.
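Where the misleading wording leads can be seen by translating Activity 10.1 into a pair of simultaneous equations (a standard algebraic rendering; the symbols b and w are ours, not part of the original activity):

```latex
% Let b be the price of the bottle and w the price of the wine, in pounds.
\begin{align*}
  b + w &= 10 && \text{(bottle and wine together cost \pounds10)}\\
  w &= b + 9 && \text{(the wine costs \pounds9 more than the bottle)}\\
\intertext{Substituting the second equation into the first:}
  b + (b + 9) &= 10 \quad\Longrightarrow\quad 2b = 1 \quad\Longrightarrow\quad b = 0.50
\end{align*}
```

So the bottle costs 50p and the wine £9.50. The near-automatic answer of £1 satisfies only the first equation: if the bottle cost £1, the wine would have to cost £9, which is £8 more than the bottle, not £9.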

In insight problems you might know what operators to apply but not realise that you know them – a paradoxical case of “unknown knowns”. Although domain knowledge is more important than analogical reasoning (Novick & Holyoak, 1991), sometimes finding a new metaphor or analogy opens up a new set of potential operators. Schön (1993) gives the example of a product-development team working for a paintbrush manufacturer. They were at first unable to get synthetic bristles to work as well as natural bristles. After watching decorators painting close to the edges of walls using slight jabbing motions with the brush, someone came up with the idea that a paintbrush was actually a kind of pump. This metaphor gave rise to a whole new research endeavour, no longer focussing on the bristles but on the spaces between them.

Another way of ensuring that you apply the correct operators in a new domain is to use a previous example. Indeed, to ensure that you use the correct operators in an unfamiliar domain, a useful way of proceeding is to copy the example as closely as possible. This is imitative problem solving. There are two drawbacks to this strategy: one is the tendency to over-transfer (Reed & Ettinger, 1987; Robertson, 2000), where irrelevant detail is transferred across from example to exercise problem; the other is that imitating an example too closely makes it difficult to adapt the example solution where necessary. In a study by Robertson (2000), children were given an example in which two cars left a location at different times and the second vehicle overtook the first. If the example gave a one-and-a-half-hour difference as 3/2, some of the children would change the 2-hour difference in the exercise problem into 4/2, even though that was completely unnecessary. They converted it because that’s what happened in the example they were imitating. The moral of the story is that when you are unsure what you are doing it’s best to keep your inductions conservative.

Goals

Problems vary in the nature of their goals. In some the answer is given and the solver has to find out how to get there. In others the goal may be only vaguely stated, but you would probably recognise it when you see it. Thus in an algebra problem where the goal is to find the value of x, as soon as you end up with “x = something” you have an answer. Similarly, if you are trying to find a catchy name for a new product, you will know when you’ve got one, after some evaluation of what you’ve come up with. There is always some kind of test against explicit or implicit criteria that tells you whether the goal is adequately satisfied. Other goals are vaguer still. You might have no detailed idea what you are going to end up with until you have finished. Some forms of artistic creation fall into this category.

Salience

The way you go about trying to solve a problem or make a decision – or even just what you pay attention to – depends on the salience of elements in the task environment and what comes most readily to mind in a given situation. Features of the environment that stand out in some way are likely to be relevant or important. Although paying attention to what appear to be the salient features of the environment is an extremely useful heuristic most of the time, there are times when it can lead you astray (Kahneman, 1991), as in the wine bottle example in Activity 10.1. Our perceptual systems have evolved over millions of years to allow fast recognition of objects and faces; as a consequence they are also prey to biases in the form of visual illusions. Similarly, the way we read sentences can lead to initial misunderstandings, as in “the old man the boats”. And the kinds of trick questions you get in puzzle books rely on the fact that certain features stand out and influence how you respond (if “joke” is spelt J-O-K-E and “folk” is spelt F-O-L-K, how is the white of an egg spelt?).

Solving insight problems or generating a creative solution to a problem often involves making some hitherto irrelevant feature salient. In Schön’s example of the new paintbrush, the spaces between the bristles suddenly became important rather than the bristles themselves (that’s where the paint gets pumped out). Because people vary in their experience, different features of the same stimulus (an insight puzzle, an X-ray plate, a landscape to paint) stand out for different people – often perceptual features different from those one might have focussed on previously. Thus previously ignored cues in a problem might remind you of a useful analogy, a previous example problem, or a previously encountered case. This variability manifests itself in expert–novice differences, but it also works at smaller timescales than the years expertise takes to develop. In Chi et al.’s (1981) study of expert–novice differences, the features experts found salient were different from those novices found salient. Salient features of a situation can also trigger a learned procedure (a lever on the right-hand side of a car’s steering wheel can be flicked down to signal a right turn); when circumstances change, these features may no longer be relevant (the windscreen wipers switch on instead).

Of course, in an unfamiliar situation relying on surface features is usually reliable, as they often reflect underlying structural features. If something you have never seen before has feathers, the chances are that it can fly. This is an example of a diachronic rule (Holland, Holyoak, Nisbett, & Thagard, 1986). On the other hand, basing decisions or other forms of behaviour on a human being’s skin colour, nationality or sex would be a very silly thing to do, since these features tell you absolutely nothing about an individual (see, e.g., Hinton, 2000).

Representation

Salience is one of the factors that influence the way we represent the world including the problems we face. A house buyer, an architect and a burglar looking at a house are going to find different aspects salient (Anderson & Pichert, 1978). People are therefore going to generate different representations of problems and situations depending on their past experience. That said, it is still possible to manipulate the likelihood of a solution by manipulating the instructions (Hayes & Simon, 1974; Simon & Hayes, 1976). Spin doctors manipulate information in an attempt to get us to represent it in certain ways.

Problem solving can’t begin until there is a mental representation of the problem for the solver to work on. A representation generated from the task environment will include text-based inferences, information and motivation we already possess (retrieved from our vast semantic network in long-term memory), and the operators cued by information in the task environment. Representing a problem in terms of the kinds of things you can do and the kinds of problem states you might reach is known as understanding in Newell and Simon’s (1972) scheme. In knowledge-lean problem solving this initial understanding forms the basis for a search through the problem space. In novice problem solving an initial mental model is generated by translating the text of the problem. For experts the process is somewhat different, since “expertise allows one to substitute recognition for search” (VanLehn, 1989, p. 592): features of the problem statement trigger appropriate schemas, which in turn indicate the appropriate solution procedure – what VanLehn calls the “second half of the schema” (VanLehn, 1989, p. 548).

Transfer

The probability that learning will be transferred from one context to another depends on the representation formed of both the target in short-term working memory and the source in long-term memory. The representation one forms of a problem will contain aspects of the context in which it was learned. The role of context in recall is well known. If you misplace something, going back to the place where you had it last will help you remember what you did with it. In exams students often find that they can’t quite remember the bit of information they need but they do remember that it is on the bottom right-hand corner of a left-hand page in the textbook. The position of the information required is irrelevant but it is stored in the memory trace nonetheless.

The context therefore often determines whether a particular source is likely to be accessed in the first place. If a relevant source problem can be found then it needs to be adapted to the current situation. When this happens and a solution can be found we have an example of positive transfer. If, on relatively rare occasions, learning something impedes our learning of something new, then we have an example of negative transfer. Examples of negative transfer include Einstellung and functional fixedness. Together they mean that our patterns of activity, our habits, or the use we make of tools and materials can blind us to alternative and potentially more effective courses of action or functions. There is a danger of over-emphasising the negative aspects of well-learned procedures. They are extremely useful and over 99% of the time they will allow us to achieve our goals.

The greater the role context plays in transfer, the more specific the transfer will be. General transfer, on the other hand, involves transferring knowledge or skills from one or more contexts to others that are superficially dissimilar. Such knowledge has to be decontextualised to some degree for transfer to happen. There is evidence that, at least for some tasks, specific transfer occurs when there is an overlap in production rules between the source and the target. Much the same goes for general transfer. Learning to search psychology databases to find relevant articles for an experimental report should help the student search archaeology databases to write a report in that domain. Schunn and Anderson (1996) give an example of transfer by “task domain” experts. Novick has shown that there can be transfer of highly abstract representational methods (Hurley & Novick, 2006; Novick, 1990; Novick & Hmelo, 1994), and Pennington and Rehder (1996), Müller (1999) and Tzuriel (2007) have emphasised the importance of conceptual transfer. Analogical transfer depends on there being some kind of similarity between source and target: they can share the same objects or surface features, or they can share the same underlying structure. Gentner’s structure-mapping theory (Falkenhainer et al., 1989; Gentner, 1983; Gentner, Anggoro, & Klibanoff, 2011) explains how the effects of the surface features of two situations can be overcome, allowing a hierarchical structure from one situation to be mapped onto a current one, and neuroimaging studies have emphasised the importance of inhibiting irrelevant features and semantic associations when engaged in relational reasoning.

Learning and the design of instruction

Despite the fact that transfer of knowledge is often constrained by context (it is often “inert”), we still manage to learn. Indeed, analogies are often used as teaching devices (e.g., Harrison & Coll, 2008; Mayer, 1993; Niebert et al., 2012; Vendetti, Matlen, Richland, & Bunge, 2015). Using analogies that are either given or in the form of textbook examples leads eventually to the abstraction of the features the examples have in common. One learns to recognise the problem type and to access the relevant solution method, and the eventual representation is usually characterised as a problem schema. Extended practice over many years leads in turn to expertise in a field. To be of any use, problem schemas have to be general enough to apply to a range of situations yet detailed or concrete enough to be used to solve a specific example. E = mc² doesn’t really tell you how to solve a given problem; equations and general principles are often too abstract to help the learner. Schema representations formed from experience have to be at a moderately abstract level (Zeitz, 1997). There are various models of how we generalise from experience. Although inductive generalisation is a very important mechanism, specialisation is also important: we need to learn the exceptions to the rules as well as the rules themselves (“i” comes before “e” except after “c” – and except in a number of words that don’t conform to the rule; emus have feathers but can’t fly). The development of expertise includes learning schemas that cover the exceptions as well as the general run of cases. For this reason experts’ representations can be flexible, and experts can override the effects of automaticity when the situation demands it.

By identifying the processes by which we learn and the nature of the capacity limits that constrain them, we can design instructional materials that take these processes and limits into account. This is what Sweller’s cognitive load theory attempts to do in parcelling out the demands of teaching materials into intrinsic, germane and extraneous load. It is also what Mayer’s principles of multimedia learning attempt to do by examining the relative influences of narration, imagery, animation, text and so on, so that instructional material integrates these various media in ways that conform to the human cognitive system.

The brain

The various methods of examining the operation of the brain while we perform some cognitive task should, ideally, allow us to see how the anatomy of the brain gives rise to cognition. For example, the primary locus of reasoning and problem solving, learning, creativity and decision making is the prefrontal cortex (PFC). According to Collins and Koechlin (2012), the PFC is capable of monitoring three or four behavioural strategies concurrently. It controls interference from potentially distracting information while integrating multiple relational representations in analogical reasoning (Cho et al., 2010). Indeed, an important finding from neurological studies is the role played by processes that control interference from distracting information – a process that has not featured in earlier cognitive models of problem solving and reasoning. However, attempting to assign cognitive functions to specific brain areas can often be problematic. Working memory has been seen as being located in various modules of the PFC, but Postle (2006) has argued that there are too many dissociable aspects of working memory (including spatial aspects and visual features of a scene, visual processing of manipulable and non-manipulable objects, processing of phonology, syntax and semantics, and so on) for it to be assigned to a small number of brain areas in the PFC; these aspects are dispersed to an extent throughout the brain. Thus one of the useful outcomes of trying to locate cognitive functions in the brain is that it forces us to reconsider some of our theories of cognition.

Although this book has concentrated mostly on the general cognitive processes involved in problem solving and learning, there are many other variables that affect whether an individual successfully solves a problem, achieves her goal, learns a new domain, becomes well versed in it, or manages to achieve a level that could be called exceptional performance. As Chapter 6 indicated, there are many factors that affect how we solve problems and eventually develop expertise. Charness, Krampe and Mayr (1996) and Charness et al. (2005) have described a taxonomy of factors that are important in (chess) skill acquisition and the development of exceptional performance. External social factors (e.g., parental, cultural, financial support), internal factors (e.g., motivation, competitiveness) and external informational factors (the nature of the domain and the sources of information about it) all affect the amount and quality of the practice a person puts in. That in turn interacts with the cognitive system, which includes “software” (the knowledge base and problem-solving processes) and “hardware” (e.g., working memory capacity, processing speed). In some domains certain factors are likely to be emphasised over others. For example, skilled chess performance may rely more on the cognitive system than on the external factors, although the latter are not negligible.

Figure 10.1 includes certain areas that have not been covered in this book. Individual performance in any task is influenced by a whole host of cultural, social and contextual factors interacting with usually stable motivational, personality and physical factors. These interact with differences in knowledge that change over time and between individuals, and with differences in cognitive processes that remain relatively stable over time (within limits). A specific problem is embedded in some immediate context which may have certain demand characteristics. Solvers may ask themselves, “Why am I being asked to do this experiment? Is this perhaps a memory task?” Alternatively, the problem may be something like a car breaking down in the middle of nowhere during a thunderstorm, where physical factors such as the temperature may affect the nature of the problem solving that takes place.

The social setting can be very important. The very presence of other people can affect processing speed (Zajonc, 1965, 1980). The cultural setting can affect how one regards a problem, or even whether a situation counts as a problem at all. One culture may spend time over the problem of how many angels can dance on the head of a pin, or over whether women have souls; another may not regard these as problems worth considering in the first place. Context, social setting, an individual’s nervous system and personality factors can together influence performance: performance on an essay question during an exam can be entirely different from performance on the same question while sitting at home by the fire.

This book, however, has concentrated on the interaction between a problem in its context and the cognitive system, and has tried to show how human beings in general (and occasionally some other forms of information-processing system) attempt to solve problems. The other areas outlined in Figure 10.1 would need to be addressed if we wanted fully to understand how any given individual confronts a particular type of problem. They also need to be taken into account if we are to understand individual differences in how people faced with a problem that they are at first unable to solve become (or fail to become) world-class experts.

References

Anderson, R. C., & Pichert, J. W. (1978). Recall of previously unrecallable information following a shift in perspective. Journal of Verbal Learning & Verbal Behavior, 17(1), 1–12. doi:10.1016/S0022-5371(78)90485-1

Charness, N., Krampe, R., & Mayr, U. (1996). The role of practice and coaching in entrepreneurial skill domains: An international comparison of life-span chess skill acquisition. In K. A. Ericsson (Ed.), The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports and Games (pp. 51–80). Mahwah, NJ: Erlbaum.

Charness, N., Tuffiash, M., Krampe, R., Reingold, E., & Vasyukova, E. (2005). The role of deliberate practice in chess expertise. Applied Cognitive Psychology, 19(2), 151–165.

Chi, M.T.H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121–152.

Cho, S., Moody, T. D., Fernandino, L., Mumford, J. A., Poldrack, R. A., Cannon, T. D., … Holyoak, K. J. (2010). Common and dissociable prefrontal loci associated with component mechanisms of analogical reasoning. Cerebral Cortex, 20(3), 524–533. doi:10.1093/cercor/bhp121

Collins, A., & Koechlin, E. (2012). Reasoning, learning, and creativity: Frontal lobe function and human decision-making. PLOS Biology, 10(3), e1001293. doi:10.1371/journal.pbio.1001293

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41, 1–63.

Gentner, D., Anggoro, F. K., & Klibanoff, R. S. (2011). Structure mapping and relational language support children’s learning of relational categories. Child Development, 82(4), 1173–1188. doi:10.1111/j.1467-8624.2011.01599.x

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155–170.

Harrison, A. G., & Coll, R. K. (2008). Using Analogies in Middle and Secondary Science Classrooms: The FAR Guide – An Interesting Way to Teach With Analogies. Thousand Oaks, CA, US: Corwin Press.

Hayes, J. R., & Simon, H. A. (1974). Understanding written problem instructions. In L. W. Gregg (Ed.), Knowledge and Cognition. Hillsdale, NJ: Erlbaum.

Hinton, P. (2000). Stereotypes, Cognition and Culture. London: Psychology Press.

Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. (1986). Induction: Processes of Inference, Learning and Discovery. Cambridge, MA: MIT Press.

Hurley, S. M., & Novick, L. R. (2006). Context and structure: The nature of students’ knowledge about three spatial diagram representations. Thinking & Reasoning, 12(3), 281–308. doi:10.1080/13546780500363974

Kahneman, D. (1991). Judgment and decision making: A personal view. Psychological Science, 2(3), 142–145.

Mayer, R. E. (1993). The instructive metaphor: Metaphoric aids to students’ understanding of science. In A. Ortony (Ed.), Metaphor and Thought (2nd ed., pp. 561–578). Cambridge: Cambridge University Press.

Müller, B. (1999). Use specificity of cognitive skills: Evidence for production rules? Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(1), 191–207. doi:10.1037/0278-7393.25.1.191

Newell, A., & Simon, H. A. (1972). Human Problem Solving. Upper Saddle River, NJ: Prentice Hall.

Niebert, K., Marsch, S., & Treagust, D. F. (2012). Understanding needs embodiment: A theory-guided reanalysis of the role of metaphors and analogies in understanding science. Science Education, 96(5), 849–877. doi:10.1002/sce.21026

Novick, L. R. (1990). Representational transfer in problem solving. Psychological Science, 1(2), 128–132. doi:10.1111/j.1467-9280.1990.tb00081.x

Novick, L. R., & Hmelo, C. E. (1994). Transferring symbolic representations across nonisomorphic problems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1296–1321.

Novick, L. R., & Holyoak, K. J. (1991). Mathematical problem solving by analogy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(3), 398–415. doi:10.1037/0278-7393.17.3.398

Pennington, N., & Rehder, B. (1996). Looking for transfer and interference. In D. L. Medin (Ed.), The Psychology of Learning and Motivation (Vol. 33). New York: Academic Press.

Postle, B. R. (2006). Working memory as an emergent property of the mind and brain. Neuroscience, 139(1), 23–38. doi:10.1016/j.neuroscience.2005.06.005

Reed, S. K., & Ettinger, M. (1987). Usefulness of tables for solving word problems. Cognition and Instruction, 4(1), 43–58.

Robertson, S. I. (2000). Imitative problem solving: Why transfer of learning often fails to occur. Instructional Science, 28(4), 263–289.

Schön, D. A. (1993). Generative metaphor: A perspective on problem-setting in social policy. In A. Ortony (Ed.), Metaphor and Thought (2nd ed., pp. 137–163). Cambridge: Cambridge University Press.

Simon, H. A., & Hayes, J. R. (1976). The understanding process: Problem isomorphs. Cognitive Psychology, 8, 165–190. doi:10.1016/0010-0285(76)90022-0

Tzuriel, D. (2007). Transfer effects of teaching conceptual versus perceptual analogies. Journal of Cognitive Education and Psychology, 6(2), 194–217.

VanLehn, K. (1989). Problem solving and cognitive skill acquisition. In M. I. Posner (Ed.), Foundations of Cognitive Science (pp. 527–579). Cambridge, MA: MIT Press.

Vendetti, M. S., Matlen, B. J., Richland, L. E., & Bunge, S. A. (2015). Analogical reasoning in the classroom: Insights from cognitive science. Mind, Brain, and Education, 9(2), 100–106. doi:10.1111/mbe.12080

Zajonc, R. B. (1965). Social facilitation. Science, 149, 269–274.

Zajonc, R. B. (1980). Compresence. In P. B. Paulus (Ed.), Psychology of Group Influence (pp. 35–60). Hillsdale, NJ: Erlbaum.

Zeitz, C. M. (1997). Some concrete advantages of abstraction: How experts’ representations facilitate reasoning. In P. J. Feltovich, K. M. Ford, & R. R. Hoffman (Eds.), Expertise in Context: Human and Machine (pp. 43–65). Menlo Park, CA; Cambridge, MA: American Association for Artificial Intelligence; MIT Press.
