Promoting Transfer through Case-Based Reasoning: Rituals and Practices in Learning by Design™ Classrooms

Janet L. Kolodner, Jacquelyn T. Gray, and Barbara Burks Fasse

Georgia Institute of Technology
Hands-on project experiences are becoming common in education, but often, while students do a fine job of completing a project, they don't learn as much from these projects as they could or should. Learning by Design™ (LBD™), based on the cognitive model implied by case-based reasoning (CBR), a model of learning from experience, provides guidelines for orchestrating and facilitating hands-on activities in classrooms in ways that promote transfer. Drawing on CBR, we've identified many of the affordances and potential affordances for transfer that project and problem-solving activities provide, and we've designed classroom rituals and practices that help teachers and students identify those affordances and act on them. Of most interest are our gallery walks and pin-up sessions and our rules-of-thumb tables. We've implemented our approach in several Learning-by-Design curriculum units. We show how the computational models of cognition that come out of case-based reasoning can help us redefine transfer as spontaneous reminding and use of previous experience in reasoning about a new situation, and we show how they suggest practices for the classroom that are consistent with what studies of human cognition tell us about promoting transfer. We then present Learning by Design’s rituals and practices. Several interesting examples of spontaneous transfer are documented from our observational data and quantitative results are presented that show that LBD students, who are exposed to an environment that is orchestrated to promote transfer, have greater facility in recognizing the appropriateness of and applying science skills and practices they’ve learned. We propose that learning environments that encourage the natural use of case-based reasoning will increase our understanding of transfer.

Keywords: Transfer, Case-Based Reasoning, Learning, Learning by Design


A student was asked to explain why she had discussed the extra variable in her science fair project. "Oh, that’s called fair testing. See, when you do an experiment, you have to have fair testing. And it is not fair testing if you are not using the right variable or if, like, there is some other variable and you don’t report it and deal with it." The researcher asked her why she didn’t just leave the variable out of the report once she recognized it as a mistake. She was incredulous at the suggestion. Clicking her tongue and rolling her eyes in disgust, she replied, "Because it wouldn’t be fair testing and would be not very good science, well, it would be bad science. It would be wrong." (Fasse, field notes, fall, 1999)

A group of boys who had not had a chance to participate in a presentation of their ideas to the class (a pin-up session) nonetheless prepared a plan and discussed it thoroughly before moving forward to construct their best parachute. As they would have done had they participated with the class in a pin-up session, they drew a chart of their ideas, taped it to the wall, and stood around it discussing with each other the pros and cons of constructing their parachute a certain way. Our researcher asked the teacher what role she had played in the boys’ decision to utilize the pin-up. The teacher responded, "Oh, they did that on their own. I didn’t assign them to work on anything. … They did the pin-up because they know that is part of the design process…. Now, it wasn’t pretty or anything, but they did the sketches of their ideas." (Fasse, field notes, fall, 1999)

Several months into the school year, students spontaneously use the terminology of LBD (i.e., criteria, constraint, collaboration, observations, fair testing, trials, etc.) when they talk among themselves or when an adult asks for explanation. Students argue over whether a particular plan for a design meets the "criteria." When asked to explain why they are doing something a certain way, they respond, "Because we have a cost constraint." Early on, students struggle with the concept of collaboration, as it seems to them to be either stealing someone else’s ideas or relinquishing ownership of their own ideas. It doesn’t take long before they are willing sharers of ideas–making suggestions, showing what they have, asking for help, crediting the origin of an idea gleaned from others. As the students make the language more and more their own, they become quite creative in its application. Students caught gossiping or engaging in off-task visiting smile slyly at the teacher and claim to be "just collaborating, Mrs. Z" followed by gales of laughter. (Fasse, field notes, spring, 2000)

The teacher, stopping by to answer a question from an individual group, recognized a teaching opportunity. Addressing the whole class without interrupting the students at work constructing their cars, he repeated the student’s question to the class, called for other examples, then gave an explanation of Newton’s second law and the properties of friction. A group of girls, the self-named Pink Ladies, listened intently, then returned to tightening bolts and rolling their car to test their work. Suddenly one of the girls experimenting with the car stopped and shouted, "Ohhhhhh, now I get it. Now I know why my mom’s car skids in the rain!" Using the car to demonstrate, she went on to explain her epiphany about friction to the others in her group. (Fasse, field notes, spring, 1998)

These examples, taken from middle-school classrooms we’ve been observing, speak directly to the abilities of middle school students to reason scientifically and take on the practices of science. They show us students acting as intelligent novices: using previous experience and knowledge to reason about a new problem; working with their peers to co-construct new knowledge and problem solutions; using evidence to justify design decisions; taking on the practices and vocabulary they’ve been exposed to; in short, spontaneously exhibiting transfer.

The students and teachers in the examples above are part of an experiment in developing an approach to science education based on the lessons suggested by case-based reasoning (Kolodner, 1993; Schank, 1982, 1999). Case-based reasoning (CBR) was developed as a way of enhancing the reasoning capabilities of computers; it is a kind of analogical reasoning in which problems are solved by reference to previously-experienced situations and the lessons learned from them. Experiences implementing computer systems that could reason and learn based on their experience have allowed the case-based reasoning community to extract principles about learning from experience — e.g., the kinds of interpretations of experience that are important to reach a reusable encoding of an experience, the kinds of interpretations of experience that promote accessibility, and the triggers for generating learning goals and explanation goals and for revising previously-made encodings.

Our experimental approach to science education is called Learning by Design™ (Kolodner et al., 1998; Hmelo et al., 2000). Based on case-based reasoning’s principles, it has middle-school students (ages 12 to 14; grades 6 to 8) learning science in the context of attempting to achieve design challenges. In trying to design and build a working device, students identify what they need to learn, engage in investigative activities to learn those things they’ve identified, and apply what they are learning to achieve their design goal. Significant attention is given to several activities as they are engaging in the design challenge: reflection on and articulation of the practices being engaged in (both before and after engaging in the practices), sharing experiences with each other, justifying ideas with evidence, explaining results, and iteration towards successful solutions. In iterating toward solutions that work well, students have a chance, as well, to debug their conceptions and identify holes in what they know.

In our set of physical science curriculum units, students learn about forces and motion by designing and building a miniature vehicle and its propulsion system such that it can navigate several hills and travel straight and far beyond the hills. They learn about mechanical advantage by designing and building a device to lift a heavy object using a very small force. They learn about energy and work by designing and constructing a model of a subway system, choosing its stops and up and down trajectory in ways that minimize the energy needed for it to run. In our set of earth science units, students learn about erosion and the earth’s surface processes by designing and constructing a model erosion management system aimed at keeping a hill from eroding onto the basketball court below. Learning by Design aims to help middle-school students learn science content and science, design, communication, and collaboration skills and practices, and to learn them in ways that afford their use in everyday reasoning.

The kinds of skills we see exhibited in the above examples -- scientific reasoning as well as planfulness, communication skills, and independent learning -- develop over time. They can be difficult to measure (Zimmerman, 2000) because the ability to transfer doesn’t happen instantaneously; rather, the building blocks for being able to participate in practices in an expert way are developed over time and in stages (Bereiter & Scardamalia, 1993). Development of skills and practices requires experiences over time and reflection on those experiences (Collins, Brown, & Newman, 1989; Bransford & Stein, 1993; Dagher, 1998; Zimmerman, 2000).

We propose that learning environments that encourage the natural use of case-based reasoning (Kolodner, 1993; Hammond, 1989; Schank, 1982, 1999) to achieve challenges of real-world complexity and that are orchestrated in ways that promote repeated practice, promote articulation of the skills and practices being used, and explicitly encourage reuse of lessons learned from old experiences, will promote transfer — both of content and skills. Learning by Design (LBD) provides an example of how to engineer such a classroom. Our summative data show transfer among our students working in groups, and we have collected observational data in our classrooms that show students participating in ways that suggest the development of transfer capabilities, especially in their learning of science, design, and collaboration skills. We present examples from several LBD classrooms that show the gradual development of the ability to participate in science, design, collaboration, and communication practices and several stages in the developing ability of learners to use the skills they are learning in new situations. We will try to describe the conditions under which one can expect such skill development to happen, using LBD as our model. We follow Bransford and others (see Bransford, Brown & Cocking, 1999; Greeno, 1997) in defining transfer at both a cognitive and cultural level. We further agree with Kuhn (1997) that development of skills should be examined both within individuals and within the context of groups of individuals engaging in collaborative problem-solving activities. We propose that investigation of the learning that happens within environments that have the qualities of Learning by Design can further increase our understanding of transfer.

We continue with a discussion about transfer -- its definition and the processes involved in using knowledge and skills in new situations. We then discuss what case-based reasoning, as a model of cognition, suggests about the cognitive processes involved in transfer, and then we present Learning by Design as a way of achieving what the transfer literature proposes through case-based reasoning. We provide several examples of skill development in the context of an LBD classroom and show data suggesting that LBD students are indeed better able to transfer than students in non-LBD science classes. We end with discussion of what LBD says about promoting transfer and some of the issues that must still be addressed.

What is Transfer?

Cognitive approaches to learning define transfer as the ability to reuse knowledge or skills learned in one context in another context they were not learned in (see, e.g., Anderson, Reder, & Simon, 1996; Salomon & Perkins, 1989). Near transfer refers to reuse in a context that is only slightly changed. If I learn to measure water in the context of making bread, and then I measure water again (the same way) when making a cake, I am exhibiting near transfer. If I use the same procedure in the science lab when asked to measure a liquid in a beaker, I am exhibiting slightly farther transfer — using a skill learned in one context in a somewhat different one. If I use the same procedure to measure liquid in a pipette, I am exhibiting farther transfer still. But, in all, I am using the skill in a context it was intended for — measuring a liquid.

Far transfer, on the other hand, means making a kind of analogical leap between two usually-separate contexts. For example, during the 2000 Olympic games, we learned that one of the swimmers’ bathing suits had been designed with a surface similar to sharks’ skin. Sharks move through water with very little drag, due partially to their shape and partially to the texture of their skin. Designing a low-drag swimsuit based on the texture of sharks’ skin is an example of very far transfer. Similarly, in 1954, the McDonnell Aircraft Corporation designed its F-101A Voodoo airplane in a shape similar to a Ganges shark — another example of very far transfer.

There are many different gradations of transfer in between very near and very far that we engage in and see around us every day. The student above who applied principles about managing variables in her science fair project was engaging in farther transfer than when she simply carried out those practices as expected in the classroom. Applying the practices to her science fair project required recognizing their applicability in a novel context, one that was not orchestrated specifically in a way that would encourage such recognition. Recognizing the applicability of a pin-up when the situation was quite different from those in which a pin-up is usually carried out is quite sophisticated transfer as well. Explaining the car’s skidding is a use of knowledge of Newton’s laws that goes beyond their application to the miniature vehicles and their propulsion systems that the students were applying their knowledge to. None of these examples is as sophisticated as the shark examples, but all show reuse of knowledge or skills in a situation they were not directly targeted for.

According to the socio-historical and socio-cognitive literatures (Vygotsky, 1978; Greeno, 1997; Collins, Brown & Newman, 1989), learning how to use knowledge and skills well requires carrying them out within the "cultural" context in which they are normally used so as to be able to recognize their applicability through the affordances of the environment and so as to be able to carry them out in ways that are valued by the social community. One sees that someone has learned, they say, when that person is able to participate in the practices of a community, and one should aim for learners to be able to carry out skills and use knowledge as they participate with others and in ways that the community values.

The science education community and the recently published American standards about science literacy aim for this kind of learning (American Association for the Advancement of Science (AAAS), 1993). They want students to learn science concepts and skills in ways that allow them to apply those concepts to new situations as they arise and to learn those skills by engaging in the practices of scientists (Dagher, 1998; Mintzes, Wandersee & Novak, 1998; Zimmerman, 2000). The aim is for students to be able to enact science process skills such as inquiry, observation, measurement, experiment design, modeling, informed decision making, and communication of results, as well as social practices such as learning from each other, and to learn each in ways that will allow them to take part in science practices in skilled ways both inside and outside the classroom.

Our Learning by Design approach takes on these same objectives. For example, we want students who are learning about forces and motion to engage with others to be able to explain why one skids on the ice, predict whether a particular surface can be walked across without falling, and come up with a procedure for figuring out why their vehicle isn’t going as far as one of their classmates' — all examples of the application of knowledge learned in science class. We also want them to be able to figure out what steps to carry out to find out which of several frisbees is more aerodynamic or which dishwashing liquid is more economical; to recognize that a politician’s reasoning is unclear (e.g., he gave only opinions with no evidence to back anything up); and to be able to communicate all of that in a way that allows others to engage in discussion where all parties learn — all skills used by scientists. We want them to use their skills and knowledge to engage with others to do each of these things and eventually to develop their skills and knowledge so that they can do each as individuals.

Transfer, then, in the context of science learning, means gaining expertise in the skills scientists engage in and learning scientific concepts and their conditions of applicability in order to deploy both in scientific reasoning. Learning by Design aims for this by helping students learn in the context of carrying out practices that are valued by the science community, using relevant science knowledge and skills as they carry out those practices, and reflecting on those experiences so as to extract what can be learned about both content and the full range of practiced skills. By helping students experience concepts and practice skills in a variety of situations, we aim to help them learn concepts and skills in ways that allow for a broad range of flexibility in their application. Indeed, we have developed a series of rituals within our approach that serve to make the use of science skills authentic by aligning them with the actual practices of scientists and designers.


Processes involved in transfer

Transfer requires successfully carrying out three processes: (1) recalling (identifying) something relevant from memory, (2) deciding on its applicability, and (3) applying what has been recalled. According to the principle of encoding specificity (Tulving & Thomson, 1973), the more similar one’s interpretation of a new situation is to the way an old item was encoded at the time it was placed in memory, the greater the chance of retrieving that old memory item (no matter whether it is in the form of a rule, a scheme, or a situation description, i.e., a case). One is thus more likely to remember an old item if the conditions surrounding its encoding are similar to conditions in the new situation.
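The effect of encoding specificity on the first of these processes, recall, can be illustrated computationally. The sketch below is our own illustration, not code from any model discussed in this article; the feature sets and the overlap measure are hypothetical simplifications. A stored item is retrieved more readily when the features encoded with it overlap more fully with the features describing the new situation:

```python
# Illustrative sketch of encoding specificity: items whose encodings share
# more features with the retrieval cue are ranked as more retrievable.
# The memory contents and scoring function are hypothetical examples.

def overlap(cue, encoding):
    """Fraction of the cue's features present in the stored encoding."""
    return len(cue & encoding) / len(cue)

memory = {
    "measuring water for bread": {"measuring", "liquid", "cup", "kitchen"},
    "titration lab":             {"measuring", "liquid", "beaker", "lab"},
    "reading a bus schedule":    {"reading", "table", "times"},
}

# A new situation: measuring a liquid in a beaker in the science lab.
cue = {"measuring", "liquid", "beaker", "lab"}

# Rank stored items by how well their encodings match the cue.
ranked = sorted(memory.items(), key=lambda kv: overlap(cue, kv[1]), reverse=True)
best, best_score = ranked[0][0], overlap(cue, ranked[0][1])
```

Here the cue matches every feature encoded with the titration experience, so it ranks first; the bread-baking experience, which shares only two of the cue's four features, ranks behind it.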

But not everything that comes to mind is applicable. Thus, one must judge the applicability of what one remembers. This judgment depends partially on the content of an old encoding — it is easier to judge applicability to the extent that an encoding specifies applicability conditions. From this we can learn some of the kinds of content that, if encoded in the representation of an experience, make it more productively usable. In particular, encodings of the causality behind decisions that were made, the reasons why a skill was used or deemed applicable, and explanations of why something worked or didn’t are important. This, in turn, suggests a need for a reasoner to receive feedback on his/her decisions — to know what happened as a result of their application.

The cognitive science literature tells us about how rules and schemas are applied (Anderson, 1983; Singley & Anderson, 1989; Newell, 1990); application of an old experience or generalized experience to a new situation can be explained by procedures from case-based reasoning (Kolodner, 1993; Schank, 1982, 1999). All approaches agree that it is easier to apply a more "contextualized" piece of knowledge, i.e., one whose applicability is more specific to the situation being encountered, than something more "abstract"; we may know an abstract piece of knowledge (e.g., Newton’s Second Law) is applicable without knowing exactly how to apply it. Achieving transfer requires encoding that makes clear how some skill was used or how some concept was applied.

Consider, for example, the girl in the example above (an 8th grader; 14 years old) who identified while working on her science fair project that she had forgotten to control an important variable. She ran her experiment and got inconsistent results across her trials. She remembered that when students had gotten inconsistent results in running their experiments in class, it meant they had forgotten to control for some variable. This suggested that maybe she had forgotten to control for some variable. But was that explanation applicable in her situation? She judged that it was (or, perhaps, guessed that it was) because in both cases an experiment was being run where consistent results were expected, and in both cases the experimenters had been careful to run the procedure exactly the same way each time. Applying that explanation to her situation, she discovered a variable that she had not controlled.

Influences on Ability to Transfer

A recent book by Bransford, Brown, and Cocking (1999) sums up what we know about influencing transfer. They begin by telling us that transfer can only be expected if someone understands a concept or strategy well; thus, memorization of facts cannot be expected to promote transfer, but rather a focus on learning with understanding is needed. Learning with understanding means understanding why something has the properties it does, how it behaves in different kinds of conditions — in general, organizing one’s knowledge of a subject in terms of how structures and functions are related to each other. Such deep understanding requires considerable time and "deliberate practice" (Ericsson et al., 1993), with frequent feedback allowing learners to assess the depth of their understanding and opportunities to see the potential transfer implications of what they are learning (Anderson et al., 1996; Klahr & Carver, 1988).

They emphasize that transfer is affected by the contexts in which learning happens and by the internal representations of a problem, its domain, and its solution that learners construct. As students solve problems or engage in deliberate practice, it is important that they get help in representing the problems they are addressing at levels of abstraction high enough to cover a range of problems (Holyoak & Thagard, 1995). Students who learn only specific tasks without understanding the principles underlying them can do other similar specific tasks well, but they can’t apply what they’ve learned in new situations (Singley & Anderson, 1989). On the other hand, students who both experience the concreteness of particular problems and learn the abstractions and principles behind those problems and their solutions can transfer what they’ve learned to a wider range of analogous problems. Transfer thus requires that concepts and strategies being learned be experienced across multiple contexts so that learners can extract relevant characteristics across contexts and learn a range of applicability conditions (Gick & Holyoak, 1983; Cognition and Technology Group at Vanderbilt, 1997; Kolodner, 1997).

The combination of working on a variety of concrete problems and learning the abstractions and principles behind each allows learners to think more flexibly (Bransford et al., 1998; Spiro et al., 1991). This is because the ability to transfer depends on the ability to recall one’s knowledge or experiences. One can recall better to the extent that one’s representations of knowledge in memory share elements with one’s representation of a new situation (e.g., Tulving & Thomson, 1973; Singley & Anderson, 1989; Schank, 1982; Kolodner, 1983). The ability to construct "generally applicable" representations (Hammond, 1989; Kolodner, 1993) and schemata that represent a collection of experiences develops over time through noticing similarities and differences across diverse events (e.g., Kolodner, 1983; Holyoak, 1984; Redmond, 1992). If we want students to be able to transfer what they are learning in school to everyday life, then it is important to expose students to the applicability of what they are learning in everyday situations and to help them learn in contexts that are similar in critical ways to what they will encounter in the world.

"Transfer", they tell us, "is best viewed as an active, dynamic process rather than a passive end-product of a particular set of learning experiences" (Bransford et al, 1999:41). According to this principle, transfer is not simply a passive event that happens after learning is complete. Rather, it is a dynamic and strategic process that includes choosing strategies, considering resources, judging applicability, and seeking and receiving feedback, and it depends on active memory processing that re-represents and re-organizes what’s been learned over time (e.g., Brown et al., 1983; Singley & Anderson, 1989). Transfer involves a set of steps — remembering something that might be applied, determining its applicability, and application. But before someone is ready to do the whole set of steps by himself or herself, s/he is often able to do the applicability and application part based on somebody else’s prompting (Gick & Holyoak, 1980; Perfetto et al., 1983).

If we help a learner know what knowledge he/she has that might be applied in a new situation, we aid the process of recognizing applicability conditions and provide hints that help the learner abstract more of what is important in representing a problem or domain. Transfer can be improved, as well, by helping students become aware of the reasoning they are doing as they learn (Collins et al., 1989; Palincsar & Brown, 1984; Scardamalia et al., 1984; Schoenfeld, 1983, 1985, 1991).

Finally, they tell us that "All learning involves transfer from previous experiences" (Bransford et al., 1999:56). That is, we are always drawing on what we know to understand new situations and concepts and to solve new problems. We need to help students activate knowledge and skills they already have that are applicable in a new learning situation so that they can build on their existing strengths and get started on the way to integrating the new things they are learning coherently with what they already know.

A View of Transfer through the Lens of Case-Based Reasoning

Case-based reasoning (Kolodner, 1993, 1997; Riesbeck & Schank, 1989; Schank, 1982, 1999) means solving a new problem by adapting an old solution or merging pieces of several old solutions, interpreting a new situation in light of similar situations, or projecting the effects of a new situation by examining the effects of a similar old situation. A case-based reasoner is constantly engaging in transfer -- always applying what it learned in one situation to another it finds itself in. Sometimes it applies what it has learned in ways that lead to success; sometimes application of what it has learned is not as successful as expected. At other times, it is surprised by what happens because an old experience predicted something different. When a case-based reasoner is surprised or encounters failure, it attempts to explain what went wrong in its reasoning and to identify what else it needs to learn. A case-based reasoner extends its knowledge (learns) by incorporating new experiences into memory in ways that are consistent with what is already in memory, re-encoding an old experience based on its application and feedback about what happened to more accurately reflect what one can learn from it and its applicability, and abstracting out generalizations across experiences.

The computational models of cognition that come out of case-based reasoning can help us redefine transfer as spontaneous reminding and use of previous experience in reasoning about a new situation. Each of the case-based reasoners we’ve implemented on the computer has a memory holding its previous problem-solving experiences. When faced with a new problem to solve, it searches that memory of experiences (cases) to find one or a few that are most usefully similar to the new situation and its goals, and then it uses the cases it finds to warn of problems that might arise, to suggest solutions, to suggest means of getting to solutions, and to help with analysis of solutions in progress. For this to work, of course, its algorithms must encode cases in ways that make them easily accessible and with content that allows their applicability to be ascertained and their application to a new situation to be easy.
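The retrieve-and-apply cycle just described can be sketched minimally. This is our own illustration, not code from any of the systems discussed; the case contents, the shared-feature similarity measure, and the subset test for applicability conditions are all simplifying assumptions:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored experience: how it was encoded, what it taught, when it applies."""
    features: set    # features the experience was encoded with
    lesson: str      # what was learned from the experience
    applies_if: set  # applicability conditions checked before reuse

def retrieve(memory, situation):
    """Return cases ordered by similarity (shared features) to the new situation."""
    return sorted(memory, key=lambda c: len(c.features & situation), reverse=True)

def transfer(memory, situation):
    """Recall, judge applicability, then apply: the three steps of transfer."""
    for case in retrieve(memory, situation):
        if case.applies_if <= situation:  # all applicability conditions hold
            return case.lesson            # apply the recalled lesson
    return None                           # nothing applicable came to mind

memory = [
    Case({"experiment", "inconsistent-results", "classroom"},
         "look for an uncontrolled variable",
         {"experiment", "inconsistent-results"}),
    Case({"vehicle", "short-distance", "friction"},
         "reduce friction at the axles",
         {"vehicle", "short-distance"}),
]

# Science-fair situation: an experiment run carefully, but results vary.
lesson = transfer(memory, {"experiment", "inconsistent-results", "science-fair"})
```

The three functions mirror the three processes of transfer discussed earlier: retrieval by similarity of encoding, a judgment of applicability, and application of the recalled lesson to the new situation.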

Experiences with making automated case-based reasoners work suggest a number of lessons that we can learn about promoting deep understanding, learning, and transfer — among them the role of feedback, explanation, and iteration; the kinds of interpretations of experiences that lead to efficient access; the kinds of interpretations of experiences that lead to effective reuse and application of what has been learned; and the variety of experiences needed to learn applicability. We can summarize what case-based reasoning has to say about transfer and its development in seven statements, adapted from Kolodner (1997) and Kolodner (1993). With each, we present the practices it suggests for the classroom. All are consistent with what the studies of human cognition tell us about promoting transfer. (For a more detailed description of CBR, see, e.g., Riesbeck & Schank, 1989, Kolodner, 1993, 1997; Schank, 1999.)

1. A case-based reasoner learns by acquiring cases and encoding them actively. A case-based reasoner that intentionally interprets its experiences to extract the lessons they teach and to anticipate the kinds of situations in which those lessons might be useful encodes its experiences in memory in ways that make them accessible in a variety of situations. The more effort a reasoner puts into this intentional interpretation, the better able it will be to recall and apply its experiences later on. Such interpretation is crucial to deep and transferable learning.

This suggests some basic rules for the classroom: (1) we should be clear about what we want students to be learning and then help them engage in the kinds of activities that have affordances for achieving those objectives, (2) we should help them interpret their experiences in ways that allow them to extract lessons learned, and (3) we should help them anticipate the conditions under which what they are learning might be applicable. In other words, we should help them both to have useful experiences and to turn those experiences into well-interpreted and well-encoded cases in their memories.

2. Failure is critical to learning. It is a good motivator of explanation; when one fails to achieve what one is intending or when one is surprised, one is motivated to explain (learning something new in the process) and to apply what one has learned to do better. Failure, then, has the affordance of focusing a reasoner on what it needs to learn. But one can recognize failure only if one has goals one is trying to achieve and expectations about how one’s actions will affect those goals.

Therefore, it is critical that students be put in situations where they are attempting to achieve meaningful goals, that they have a need to apply what they are learning to achieve those goals, that they have the opportunity to get meaningful feedback that will allow them to recognize failures, and that they be asked to predict what will happen before they try out their ideas. Trying out what they’ve learned and getting interpretable feedback will then focus the learner on what else needs to be learned. Their learning experiences should be aimed toward an achievable goal, one where it is possible to get better and better solutions as they learn more.

3. A reasoner that is connected to the world can judge the goodness of its predictions, recognize its successes and failures, and evaluate its solutions with respect to what results from them. This allows encodings that discriminate the usability of what it is learning and supports good judgments about later reuse.

We must make sure, therefore, that learners have a chance to apply what they are learning in ways that provide real, interpretable, and timely feedback — the kind of feedback that allows them to experience the effects of what they’ve done. We need to choose learning experiences that afford such feedback.

4. Failure at applying an old case in a new situation triggers a need for explanation that might result in reinterpreting old situations and/or discovering new kinds of interpretations. But one cannot always fully interpret one’s experiences and extract the lessons learned and their applicability at the time they are experienced — because one is missing information about the situation or because one doesn’t know enough yet. We can get around that deficiency with an iterative cycle in which the reasoner has multiple opportunities to revisit old experiences, attempting to apply them in a variety of situations and, each time, refining its interpretations of them based on new explanations it can derive. As more is learned, better interpretations can be encoded in memory.

This corollary of the second principle suggests that it is critical, in the classroom, for students to engage in an iterative cycle of applying what they have begun to learn in new situations, failing (gently) in their application, explaining the failures, re-interpreting, and repeating the process incrementally. We can help them maximize what they can learn in each iteration by helping them notice failures, explain, and re-interpret, as we don’t want their lack of interpretive skills to keep them from learning from their experiences. We should make sure that classroom activities are orchestrated so that students periodically spiral back to old concepts and skills after considerably more is learned so that encodings and interpretations of concepts and experiences can be revised and updated. Learning experiences need to be engaging enough to hold their interest through several iterations.
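One way to picture a single turn of this iterative cycle is as a toy function, again a sketch under our own assumptions rather than a real CBR implementation: apply a recalled case, compare the outcome to expectations, and on failure let explanation refine the case’s applicability conditions so the next retrieval is better informed. The `try_out` and `explain` callbacks, and the example data, are hypothetical:

```python
def apply_and_learn(case, situation, try_out, explain):
    """One iteration of the CBR learning cycle: apply a recalled case,
    detect failure against expectations, and re-index on failure.

    try_out(situation, lesson) -> (succeeded, outcome)
    explain(case, situation, outcome) -> refined index set
    """
    succeeded, outcome = try_out(situation, case["lesson"])
    if succeeded:
        return case  # expectations met; the encoding stands
    # Failure focuses the reasoner on what it needs to learn:
    # explaining the failure yields a better account of when
    # the lesson actually applies.
    case["indices"] = explain(case, situation, outcome)
    return case

# Toy example: a lesson over-generalized from one experience.
case = {"lesson": "bigger balloon -> car goes farther",
        "indices": {"propulsion"}}

def try_out(situation, lesson):
    # The bigger balloon fails when the chassis is too heavy to start.
    return ("heavy chassis" not in situation, "car barely moved")

def explain(case, situation, outcome):
    # Explanation narrows the applicability conditions.
    return case["indices"] | {"light chassis"}

case = apply_and_learn(case, {"propulsion", "heavy chassis"}, try_out, explain)
```

After the failed application, the case carries a refined set of conditions, the computational analogue of a learner revising a not-quite-accurate conception after a gentle failure.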

5. The better the analysis of an experience, the better the interpretation and encoding, the better the later access, the better the learning.

This principle, a corollary of the first one, exhorts us to make sure that students are reflecting deeply on their experiences and interpreting them well. Beginners can’t do that well on their own; we need to help them know what to notice. Because the interpretations they need to do (e.g., explanation, anticipation of usefulness) are highly knowledge-dependent, we need to help students extract from their experiences the essential themes and concepts of the domain and set of practices they are learning.

6. A single experience may hold affordances for learning many different things (a physical law, how to communicate something to others, planning, cooperation, a construction practice, and so on). Learners will learn those that they focus on.

This corollary of the first principle suggests that we need to help learners focus on what’s important in a situation and help them keep an eye on the full range of things that can be learned from their experiences so that they can use their experiences wisely for learning.

7. Fluidity in carrying out the processes involved in case-based reasoning allows for more productive reasoning.

Just as we need to help learners interpret their experiences well, we may also need to help them strategically recall what they know at the right times, judge applicability of what they’ve recalled, and apply what they’ve recalled, all in an attempt to help them learn to do this reasoning well by themselves. If we want students to do a good job of re-using their experiences as they reason, then we need to help them learn the processes involved. This harkens back to one of our central claims — that a reasoner that can engage in the processes involved in using cases to reason will do better at transfer than one that can’t.


From Case-Based Reasoning to Learning by Design™

Learning by Design (LBD) (Kolodner et al., 1998; Hmelo et al., 2000) adopts these suggestions. It begins by making sure students are having the kinds of experiences that can lead to achieving targeted learning objectives. It continues by orchestrating classroom activities so that their enactment of those experiences and their interpretations of them lead the students to identify what they are learning from their experiences and to associate good applicability conditions (indexes, in CBR parlance) with each thing they learn. It also orchestrates sequences of experiences for students that require access and reuse of what they’ve been learning. It does that in such a way that sometimes their attempts at reuse are successful and sometimes they are not. The subsequent need to explain when reuse doesn’t lead to the success they expect is meant to promote reinterpretation of the conceptions they derived from previous experiences and revision of their not-quite-accurate conceptions.

LBD units orchestrate learners’ classroom experiences to be much like the experiences of our case-based reasoning programs.

The aim is for students to later be able to naturally engage in case-based reasoning (i.e., to exhibit transfer) based on their classroom experiences. The examples at the beginning of this paper illustrate what that looks like.


LBD’s Cycles of Activities

Learning by Design enacts a cycle of activities driven by the given design challenge (Figure 1). The need to achieve the design challenge leads learners to identify skills and concepts they need to learn, carry out investigations to learn those things, and then apply what they’ve learned. Application may lead them to identify other things they need to learn, and they investigate again. There is constant movement back and forth, in LBD, between designing and investigation, with the needs of designing driving investigative goals.

The design challenge provides students a need to learn, a reason for remaining engaged, a venue for application and practice, a venue for failing softly and needing to explain, and a venue where reflection on and articulation of what they are learning and iteration towards better understanding are natural. LBD asks students to engage in sequences of activities that move them towards successful achievement of the challenge. In the process, they engage in a variety of science, design, collaboration, and communication practices. They have the opportunity to learn the concepts and skills that are needed for success by identifying a need to learn them, getting experience trying them out, questioning their accuracy, and revising.

For example, to design their best balloon propulsion system, students must understand the ins and outs of combining forces with each other, the way increased force leads to increased distance traveled, and several ways of increasing force. This allows them to understand the forces that are at work as their balloon car increases its speed early on when its engine is emitting air and those at work that cause their vehicle to slow down after the balloons are empty. Only with this understanding can they design a balloon engine that takes best advantage of the materials they have available (balloons of several sizes, straws of several lengths and diameters, cups to insert balloon engines into and hold them in place). Initial experiences with balloon engines afford asking questions about the forces at work; reading and discussion and working through problems afford beginning to understand how forces combine; experimental investigations afford learning the ways changes in the materials they are using affect the forces being exerted and experiencing the effects of different combinations of forces; and construction and testing of balloon car engines afford further experiencing the effects of different combinations of forces and identifying places where their understanding is still incomplete, which suggests additional investigation and application. Pulling what can be learned from all of these experiences together affords deep learning about forces and their effects on motion.

But, as the transfer and CBR literatures tell us, learners will not necessarily notice all of the important characteristics of a situation that afford learning. For this reason, LBD interleaves doing with guided reflection, taking advantage of the iterative and collaborative natures of designing to sequence this interleaving in ways that will seem authentic to students. There are several points in the cycle of iterative design when it seems natural and useful to make presentations to others and/or to discuss ideas, and LBD uses those times as opportunities for promoting reflection. Each serves to help students better achieve their design challenge in addition to providing opportunities and circumstances for productive reflection.

Engineers and architects, for example, engage with their peers and clients in "design briefs," formal presentations of a set of design plans or in-process design plans. Design briefing sessions provide opportunities for checking that design criteria and constraints are addressed as well as possible and for getting suggestions from others with differing perspectives. We have students do the same in LBD classrooms, presenting their design ideas to each other in "pin-up sessions." They present their design ideas as well as their justifications for each — the combination of science principles (e.g., about how forces combine) and experimental results (e.g., about the effect of the length of a straw on the force a balloon engine exerts) that suggest using certain materials or configuring a device a certain way. Preparing for a pin-up session requires that members of each project group reflect on their goals, how they are aiming to achieve them, and why, in the process connecting the science they are learning to its uses. Whole-class discussions after a pin-up session can be used to help students focus on the principles common across designs or on the special way a science concept is being applied in one design or another. Other kinds of presentation activities include "poster sessions," where students present their investigative procedures and results to each other, and "gallery walks," where they present their designs in progress and their behaviors. Two important kinds of whole-class discussions in addition to the ones that come after these presentation sessions are "whiteboarding," where the whole class keeps track of facts they know, ideas and hypotheses they have, and what they still need to learn, and "rules of thumb generation," where the class works towards identifying trends in the experimental results and tests of designs that can help them with subsequent designing and tries to explain each trend in terms of science principles they are learning.

These reflective opportunities are useful not only for focusing on content being addressed but also for focusing on skills being used and practices being engaged in. Students are expected during pin-up sessions, for example, to justify their design decisions. Students question each other if they don’t understand a peer’s justification, if they don’t believe it, or if they think the evidence points elsewhere. Discussion at the end of the pin-up session can focus on justification and use of evidence in addition to focusing on the science content students used to justify their decisions. Discussion after poster sessions focuses on experimental design in addition to its focus on science content, and discussion after gallery walks focuses on making good scientific explanations. The intention in reflective activities is that they help students focus on what’s most important in their hands-on activities, identify what they are learning, and connect their actions with their goals — what CBR says is important for promoting deep and lasting learning.

Figure 1 shows LBD’s cycles of activities. Tucked within the two essential components of learning from design activities — design/redesign (application) and investigation — are the variety of doing and reflective activities and public presentations explained above. Together, the two specialize a learning cycle, where the steps in the learning cycle are enacted through activities specific to investigating and designing. Design involves iteratively understanding a challenge (and identifying what needs to be learned), planning out one’s best design solution based on what one has learned, then construction, testing, analysis and explanation, and the cycle begins again. Investigation involves clarifying what one needs to learn more about, figuring out how to carry out one’s investigation, conducting the investigation, and analyzing results for what can be learned from them. Investigation is done as a result of a need to know and in the context of knowing how what is being learned will be put to use. Embedded in both of these cycles are venues for public presentation of work in progress and discussion of what’s been done and what’s been learned.

Figure 1: LBD’s Cycles


LBD’s Rituals

Embedded in some steps of LBD’s cycles are practices that help the students engage productively in both doing and reflecting. We call those LBD’s rituals. LBD’s rituals are designed to encourage the kinds of reasoning that are important to reusing one’s experiences; their sequencing is intended to provide an authentic environment for doing that reasoning. Rituals are carried out similarly each time they are enacted, allowing students to focus on the content of what they are doing and thinking about rather than the details of how to engage with the class. Table 1 shows the full range of LBD’s rituals.

LBD’s three kinds of presentation sessions (introduced above) are perhaps its most interesting rituals. While all take the form of each project group presenting to the class and answering questions and taking suggestions, followed by whole-class discussion summarizing and abstracting over the full set of presentations, each is carried out during a different part of the design or investigation cycle, and each focuses on a different set of skills. Each is purposefully deployed at times in the learning sequence that mirror the actual use of these practices by designers and scientists.


Table 1: LBD Rituals and their Purposes

LBD ritual

Placement in cycle

Purpose

Pin-up session

Design: Present & Share

Presentation of design ideas and design decisions and their justifications for peer review

Gallery walk

Design: Present & Share

Presentation of design experiences and explanation of design’s behavior for peer review

Poster session

Investigate: Present & Share

Present procedures, results, and analysis of investigations for peer review

Messing about

Design: Understand challenge

Exploration (in small groups) of materials or devices to identify phenomena, promote question asking, and see connections between science and the world; followed by whiteboarding


Whiteboarding

Design: Understand challenge

Investigate: Clarify question

A forum for sharing what peers know, their ideas, and what they need to learn, and to keep track of class’ progress and common knowledge

Gathering examples

Design: Understand challenge

Exploration (as individuals) of the world around them for science in practice so as to see connections between science and the world; followed by sharing of examples, discussion of them, and whiteboarding

Creating and refining design rules of thumb

Design: Analyze & Explain, Present & Share (gallery walk), Understand challenge

Investigate: Analyze results, Present & Share (poster session)

Identify trends in data and in the behaviors of devices; connect trends to scientific explanations so as to know when they apply

"Pin-up sessions," like the professional design briefing, are a forum for sharing design ideas, getting advice from peers, and providing advice to others. Preparing for a pin-up session means planning one’s design and articulating and justifying design ideas well enough so that everyone in a collaborative group agrees and so that ideas can be shared and justified for others. Design planning and subsequent pin-up sessions serve a variety of purposes. First, they engage students in thinking about the implications of what’s been learned during investigations as they attempt to achieve the design challenge. As part of their planning, students go over rules of thumb that have been derived and think about how to apply them (more on rules of thumb later). Second, in asking them to explain their design decisions to others, we ask students to justify decisions with evidence — an important science practice. Third, presentations during pin-up sessions allow classmates to hear several ways that the results of their experiments can be used and to see how others are trying to apply what they’ve learned. They allow peers to experience the thoughts of others in the class, and they force individuals to articulate their reasoning. Finally, pin-up sessions provide students with other minds that can help them think through their thoughts more clearly. Having to answer questions from peers requires deeper thinking than a student might have done on his/her own.

After planning, students construct their device and test it, collecting data. Almost always, their device doesn’t work as well as they would like the first time. They need to iterate to achieve success. Because they know the reasons for all of their design decisions, construction and testing provide students with a real-world test of their conceptions at the same time they provide a real-world test of their designs. The explanation that they do to figure out why their device didn’t work as well as it should have helps them both to debug their designs and debug their knowledge. During "gallery walks," students present their designs-in-progress to each other, reporting on what happened when they constructed and tested the design they had proposed. This, too, is common in design practice — to get a group of people together to critique the design so far and offer revisions. Students try to explain their results scientifically to their peers and, like in a pin-up session, present the ways they will revise their designs, justifying their design choices. Presentations during "gallery walks" provide opportunities for sharing of insights, seeing how others in the class have implemented similar ideas, and getting help with explanation. Those who are presenting articulate what they tried in their constructions, what happened when they ran it, the reasons and explanations they have of why it worked that way, and what they will do next (and why), or they ask for help if they can’t explain. Gallery walks allow students to share in the experiences of their peers, providing them with more variety in the application of concepts they are learning and ways of engaging in science practices than they can experience personally.

Enactment of rituals in the Investigate and Explore mini-cycle is designed to promote learning of scientific methods — if students know why they are running experiments and need the results of experiments to achieve some goal they have, they might enthusiastically engage in experiment design, in helping each other run experiments well, and in learning the practices of experimentation. "Poster sessions," which come at the end of an investigation, are much like the poster sessions scientists engage in at conferences. Students report to each other the question they were trying to answer and their hypothesis, the investigative procedure they used, their results, and their analysis and interpretations of their results, including any rules of thumb they might have derived. In the LBD classroom, as in any inquiry classroom, there is generally not time for every student to engage in investigating every question that arises. Thus, different groups in the class each address a different question. Because students are all aiming to address the same challenge, they need each other’s results for their own success, and because they need to trust each other’s results, poster sessions take on the qualities of real-world poster sessions — students question each other about the procedures they used, the consistency of their results, and the basis on which they derive interpretations. The opportunity is there for the teacher to help them identify the kinds of procedures and interpretation strategies that lead to trustworthy and replicable results; discussion of scientific methodology becomes a natural outcome of the poster session.

The rituals called "messing about," "whiteboarding," and "gathering examples," all done while understanding a design challenge, are designed to get students identifying phenomena, wondering, articulating questions, and seeing the connections between the science they are learning and the world around them. "Messing about" (Hawkins, 1974) is a kind of guided play where students compare and contrast the way different devices or materials work and grow curious about what is responsible for their differing behaviors. For example, to encourage students to ask about motion and what affects it (in the context of designing miniature vehicles), we have them explore the behavior of several kinds of toy cars going over hills and moving on several kinds of surfaces. The differences in the capabilities of the cars elicit questions about how to get things going, how to keep them going, why they slow down, and so on. In addition, playing with these familiar objects helps them remember similar experiences from their lives outside of school, and those experiences become available for reasoning in addition to their experiences in the classroom.

"Whiteboarding," taken from Problem-Based Learning (Barrows, 1985; Koschmann et al., 1994), provides a public forum for recording what everyone has seen during messing about, ideas they’ve gotten about addressing the challenge, and the questions they’ve thought of. It allows everyone in the class to benefit from the experiences and curiosity of others. "Gathering examples" happens each time a new phenomenon or mechanism (e.g., friction, bearings) is encountered. Students are asked to find four to six examples in the world around them, to draw pictures and explain what’s going on, and to share them with the class. This activity, too, elicits questions, and it, too, is followed by whiteboarding. The class whiteboard is kept visible for the duration of a challenge and returned to each time questions that have been raised are answered and each time the class has engaged in some activity that has resulted in their having new ideas about how to address the challenge. By making whiteboarding part of understanding the challenge and understanding the question being investigated, we aim to ensure that the whiteboard is visited and revised before each set of investigative activities, after each poster session, and after each gallery walk (before redesigning).

This use of the whiteboard helps anchor student investigations and activities in a purposeful goal. Achieving the design challenge is the ultimate reason for other activities, and if it is returned to between activities, it provides a way of keeping students anchored in what they are doing. With the design challenge in mind, their investigations, presentations to each other, design iterations, and so on, all have a purpose. It is important, each time something new is learned, to consider how it might apply to the design challenge. The whiteboards help with that, as they are returned to each time significant progress has been made in learning some concepts or when experiences lead to new questions or new ideas. Using the design challenge as an anchor while revising the whiteboard helps focus classroom conversations.

"Creating design rules of thumb" means extracting from one’s experimental results some general principle that others might apply as they are designing. The ritual has the purpose of helping students focus on the usefulness and believability of experimental results, both while they are analyzing their own data to extract a rule of thumb and while they are analyzing the quality of their peers’ analysis. The practice of extracting a believable rule of thumb from an investigation provides students a way of summarizing their own results and what they can learn from their investigation and of recognizing how good someone else’s investigation was. If a rule of thumb is believable (because it is backed up by evidence presented to the class), students feel comfortable reusing that peer’s results. If it is not believable (e.g., if it doesn’t match the data well), students engage in critiquing their peers’ experimental designs or procedures, helping them move toward a better design or way of running an experiment, and asking them to redo their investigation. As well, design rules of thumb provide a mechanism to help students move from concrete (their own experience) to abstract (a rule that applies in other situations as well).
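The move from concrete results to an abstract rule can be illustrated with a toy sketch. This is our own construction with made-up data, not a tool used in LBD classrooms: it states a tentative rule of thumb only when the data show a clean monotonic trend, and withholds one otherwise, mirroring the believability check peers apply:

```python
def rule_of_thumb(results, variable, outcome):
    """Derive a tentative design rule of thumb from experimental
    results: a monotonic trend in the data, stated so that others
    can apply (and challenge) it. results: list of (value, measure)."""
    ordered = sorted(results)               # order by the varied quantity
    measures = [m for _, m in ordered]
    if all(a < b for a, b in zip(measures, measures[1:])):
        return f"Increasing {variable} tends to increase {outcome}."
    if all(a > b for a, b in zip(measures, measures[1:])):
        return f"Increasing {variable} tends to decrease {outcome}."
    return None  # no clean trend: the data don't support a rule yet

# Hypothetical balloon-engine data: (straw length in cm, distance in m)
data = [(5, 2.1), (10, 3.0), (15, 3.8)]
rule = rule_of_thumb(data, "straw length", "distance traveled")
```

When the trend is inconsistent the function returns nothing, which is the computational analogue of the class sending a group back to redo an investigation whose rule of thumb doesn’t match its data.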

Pin-up sessions, gallery walks, poster sessions, and whiteboarding sessions, all public forums, provide the teacher a chance to identify the kinds of help students need and to guide reflective discussions, readings, and further investigations. These public sessions afford the teacher the possibility of focusing discussion in a variety of directions — on science, design, or communication practices students have done well at or need to improve, on LBD practices they’ve done well with or need to improve, and on science and technology concepts. The teacher can ask students to compare and contrast the approaches of different groups in the class, to talk about the reasoning they did and what allowed them to be successful, to gather examples of anything that needs more attention, to write for homework about how they did something, and so on. LBD’s public forums are sequenced so that the teacher can identify areas where attention is needed and guide articulation, investigation, and reflection at times when it is most needed and might be most productive.


LBD in Practice

In the classroom, students work in small groups for all doing and initial reflection activities; they get together as a whole class to share what they’ve done, noticed, and discovered and to uncover trends; return to their small groups to apply what they’ve learned; and work as individuals to prepare for their next small-group work sessions, to write up what they’ve learned, and to write up their investigative and design experiences as reports. Individuals have responsibility for making contributions to their groups, for helping others learn, and for showing the teacher what they’ve learned as individuals.

Projects are graded based primarily on the explanations students can give of how their artifacts work and why they chose to design them the way they did. Individuals are asked to prepare for group work, but they are never asked to turn in assignments for grades until after the group has discussed the work of individuals and made its group decisions, the group’s decisions and explanations have been reviewed by the class, and each individual, for homework, has had the chance to reflect on those experiences to draw out what they can learn from them. This movement from small-group to whole-class activities and back, coupled with individual preparation and writing up, moves the whole class along at a similar pace while allowing groups that can accomplish more to do more and allowing all students to benefit from each other’s experiences and expertise.

Of course, it takes time before students are expert at carrying out LBD’s rituals, at carrying out essential skills, and at collaborating with each other to enact scientific and project practices. Thus, introduction to the practices of LBD and the skills and practices of science and collaboration follows a developmental cycle. Sequencing is orchestrated so that students always build on extant knowledge and practices to develop sophistication. Science practices and LBD rituals are introduced gradually and in ways that invite students to continually add sophistication to the way they carry them out. Students have their earliest experiences with each of these practices during an introductory unit (Apollo 13) that introduces the practices of science and design, where practices are introduced one or two at a time and students carry them out in situations where they aren’t at the same time trying to learn complicated new content (Holbrook & Kolodner, 2000). Their first gallery walk is in the context of their attempting to design and build a 3-inch-high support for a heavy book out of index cards and paper clips. Their first design of an experimental procedure is done as a full class after everyone runs a procedure their own way and the students see how inconsistent their results are across groups. Their first whiteboarding, along with the notion of a "fair test," comes as they are designing and running procedures to compare the quality of backpacks, sticky tape, tennis shoes, or insulation materials. When skills and practices are introduced, whether they are science skills or LBD rituals, a textbook page explains the practice — very briefly before they carry it out and with more detail once they’ve done it once, to help them understand and debug their practice. Later on, when they encounter them again, they are introduced with a bit more sophistication and expectations are higher.
By the time students are designing experiments for the fourth time, for example, the text simply tells them to design and run the experiments they need to run, reminding them to vary only one variable at a time and to keep good records on their design diary pages. Our textbook pages provide the prompting we think students need at each point in their development. The teacher fills in with additional prompting, and classroom discussions after enactment of rituals help students understand better what they are doing.

The combination of these cycles (Table 2) promotes, as much as possible, experience with conceptions and practices before spending time discussing them, reading about them, or reflecting on them. Students experience Newton’s laws before they see them in print. They experience friction before they discuss its consequences. They engage in a gallery walk before they discuss its ins and outs. Certainly some knowledge of science content and practices is needed before getting started — we try to keep that to a minimum and try to get at it through "messing about" activities, one of whose purposes is to jog students’ memories about what they already know.

Table 2: LBD’s Mini-Cycles and their Enactments





Introduce skills and practices incrementally; work toward iteratively increasing level of skillfulness in carrying each out; Sequence activities so that successive ones require more skills and more skillfulness; Reflect on and discuss skills in successively more detail over time

Do and Reflect

Experience with concepts and LBD rituals before discussing them; Small-group work followed by preparation for presentation followed by presentation and whole-class discussions

Zooming in and out

Anchor activities and investigations in a purposeful goal; Return to whiteboard between activities and revise it

Movement between small-group and whole-class activities

Move whole class along at a similar pace; Allow students to benefit from each other’s experiences; Activities are done in small groups; Reports are to the class; Discussion is as a class; Class agrees on next small-group activities

LBD in practice enacts a set of values and expectations — social conventions that promote learning. Table 3 summarizes. Collaboration is a value in an LBD classroom, not simply a mode of activity. When collaboration is a strong value in a classroom, we’ve observed, even competitions are friendly, with high-performing teams helping out other teams because they want to help them learn. Performance with understanding and informed decision making are twin values in LBD classrooms, enacted through grading policies and in public presentations and prompted by textbook and design diary pages. Science as a set of processes and science connected to the world around us are two more twin values. Students in LBD classrooms know that science is as much about learning and discovering as it is about knowing facts. And because their learning of science concepts and skills is always with a purpose, they understand the role of science in society. Personal responsibility is enacted through the kinds of homework assignments that are given and expectations about group work. Finally, in our most successful classrooms, teachers set reasonable expectations that must be met, make those expectations clear to students, and maintain decorum and discipline in the classroom (Hmelo et al., 2000; Kolodner et al., 2002). Learning is serious work in these classrooms.

Table 3: LBD’s Values and their Enactments

Collaboration:
Whole-group discussion forums; Presentation forums; Small-group work; Peers helping peers; Group design diaries; Giving peers credit for ideas

Performance with understanding; Informed decision making:
Grading based on ability to justify and explain, not merely based on performance of designs; Public presentations that include justifications and explanations

Science as a set of processes; Science connected to the world:
Science practices carried out with a purpose; Science applied to real-world problems

Personal responsibility:
Individuals prepare for small-group work as homework; Individuals hand in lab reports and project histories (to complement posters and gallery walks done in groups)

Learning as serious work:
A contract: Students are expected to exhibit respect for peers, keep up with work, stay on task, use science, and contribute to group work; in return, they get to participate in LBD


Back to transfer

LBD is designed to address each of the guidelines we know for promoting transfer. LBD promotes good encoding in many different ways — asking students to apply concepts and science principles they are learning in several ways, to think about times they’ve encountered those science concepts in the world around them, and to explain how the principles apply in what they are constructing; and asking them to carry out practices in a variety of circumstances, to think about how they carried out the practices and why they did them the way they did, and to see what happens as a result. Students encounter a wide variety of experiences to go with each skill and science principle encountered — their own experiences as well as those of their peers are shared in whole-class discussions, pin-ups, and gallery walks; they are placed in situations where they can fail softly and where they need to explain those failures in order to improve their designs; they get help from each other in explaining, and they get multiple chances to explain as they iterate towards better and better solutions. They experience and explain the application of scientific principles and concepts (e.g., Newton’s Laws, gravity, friction, velocity) as well as important practices (e.g., controlling variables in experiments, measuring, observing, communicating, planning, sketching).

LBD addresses the principles on promoting transfer put forth by Bransford et al. (1999). LBD promotes the deliberate practice needed for deep understanding. Students are asked to apply what they are learning and to explain how it applies, not simply to recite scientific laws and define vocabulary words. LBD promotes contact with the patterns of a domain by giving students a wide variety of experiences — their own and those of their peers; classroom experiences as well as recall of life experiences and examples from the environment. They practice what they are learning as they attempt to make their devices work well. Feedback for assessing their understanding comes in several ways — through trying out their ideas in what they are building and experiencing the effects, by seeing those effects as others try other things, by hearing explanations and advice from their peers, by trying to give their peers advice. They know they are learning when they can follow the advice of their peers, come up with explanations of how their devices work and explain them to their peers, and give their peers advice. LBD’s biggest advantage may be in the way it motivates students and helps them feel accomplished. Our observations and reports from our teachers tell us that approximately 95% of students are on-task all the time (Pennington & Rhew, 2000). And all are capable of achieving LBD’s challenges and feel good about it, though some achieve the challenges better than others.

LBD aims to combine concrete and abstract representations by asking students to work on specific solutions to design challenges, to explain how their own creations work, and to look across what everyone in the class has done to find similarities and differences and to explain the differences in how their devices work. They read pages explaining the abstractions (e.g., gravity, friction, velocity, Newton’s Second Law) at the point where they have a need and a desire to explain, and they are asked to use the vocabulary of science in making their explanations. They make connections to the world by finding examples of those concepts in the world around them, and considerable time is spent discussing and explaining those examples. The rules of thumb they generate are a first step in moving from the concrete experiences they are having toward more abstract conceptions.

The literature tells us to think of transfer as a dynamic process and to treat promoting transfer as an ongoing effort to give students practice at engaging in transfer. LBD’s focus on iterative design has students revisiting the same concepts and practicing the same skills over and over again; its public forums are venues for practicing application of those concepts and skills. Whiteboarding, gallery walks, poster sessions, and pin-up sessions are LBD’s venues for reporting on what one has done and why, and for collecting multiple examples from which to extract abstractions and principles. They provide a venue for coming into contact with a wide variety of situations and for practicing application of what is being learned. LBD’s focus on iterative skill development supports development of transfer as well. Students engage in the practices of LBD and of science over and over, making presentations in public forums that provide some immediate feedback on their ability to engage in those practices. If they’ve explained well with evidence, for example, their peers will accept their arguments more readily than if they haven’t provided evidence. If they haven’t provided enough evidence for a claim, their peers will ask for more.

These public forums also provide a venue that allows the teacher to gauge how far students have come and what they need for better reuse of concepts and skills, and that allows teacher and peers to provide the help students need to carry out practices and to apply concepts. Teachers and peers model for other students during these sessions, prompt students with what they need for success, and provide help as needed. Students hear what’s critical from each other and not only from the teacher; they get help with their own reasoning from peers whom they are willing to listen to; and those who explain get practice applying concepts and enacting practices. Discussion during these sessions is about science concepts (e.g., identifying the forces) and also about science practices (e.g., managing variables in an experiment) and project practices (e.g., communicating so others can understand, making good sketches, planning, designing). Students might not be able to recall a practice or concept they need; public forums provide opportunities for other students to remind them. A student may have trouble applying some concept or engaging in some practice; public forums provide opportunities for other students to help. The combination of iterative refinement of solutions, repetition of key practices, and presentation of one’s results and thinking to the class provides a variety of opportunities to practice reusing one’s experiences and to be helped in applying one’s knowledge and engaging better in practices.

Finally, the literature tells us that all new learning requires transfer from previous experience, and that it is therefore incumbent on educators to help students activate what they already know so that they can apply it, to understand the conceptions students come with and help them re-conceptualize those conceptions, and to help students overcome any obstacles to learning that might arise from conflicts between practices they are used to and those that we want them to engage in. LBD accomplishes these in a variety of ways. (i) Messing about is designed to help students activate knowledge they already have and to help them have experiences that they can refer to later. (ii) Explanation is a key part of each of the public rituals, and one of the purposes in making explanation so central is to provide students a forum for articulating what they "know" that might be faulty and to get them to practice using the new knowledge they are gaining. (iii) Because LBD practices conflict with what students are used to from previous science classes, we’ve developed a month-long "launcher" unit (Holbrook & Kolodner, 2000; Kolodner & Holbrook, submitted) to help both students and teachers learn and acclimate to the new practices of the classroom; engagement in the new practices in ways that highlight their usefulness, and discussion of the purposes of each, are a big part of that unit.


Our hypothesis, stated in the introduction, is that learning environments that encourage the natural use of case-based reasoning (Kolodner, 1993; Hammond, 1989; Schank, 1982, 1999) to achieve challenges of real-world complexity and that are orchestrated in ways that promote repeated practice, promote articulation of the skills and practices being used, and explicitly encourage reuse of lessons learned from old experiences, will promote transfer — both of content and of skills. We’ve investigated that hypothesis by examining the transferable learning that occurs among students in LBD classrooms. In particular, we’ve sought to answer two questions in the analyses we’ve done to date: (1) Can we observe transferable learning in the natural course of day-to-day classroom activity? If so, what does it look like? (2) When we compare LBD students, who were in an environment engineered to promote transferable learning, to students who learned in an inquiry environment not orchestrated specifically in that way, do the LBD students exhibit better transfer, and, if so, on which dimensions do they exhibit better transfer?

We’ve used two methodologies for investigating the transfer that develops through LBD practices: data collected in LBD classrooms during a design experiment (Brown, 1992) helps us answer the first question, while analysis of student practices in performance tasks done by LBD and comparison classes during spring of the school year helps us answer the second. Our primary sources for identifying instances of transfer are observations, videos, performance post-tests, homework assignments, and interactions with teachers. By using a broad range of sources, we’ve hoped to cover a wide range of the possible times when transfer might show itself. From those sets of records, we look for four kinds of instances that provide evidence of transfer.

Campione et al. (1995) warn us not to expect that we’ll see transfer in every student and not to expect that it will be the same for all students. Since students are all at different points in their understanding, and since they’ve all had different experiences in their lives, what they’ve each stored in their memories is different. Because of that, the encodings each will have in memory and the instances when each will exhibit interesting transfer will vary greatly. One encodes inertia as what makes it hard to get something started; another encodes it as what makes it hard to stop something in motion; another encodes it as both; another has in her encoding that inertia varies with mass; another encodes that it varies with mass and that it makes things harder or easier to stop, slow down, or get started; and so on. One associates inertia with trying to stop a shopping cart in a supermarket parking lot; another with not being able to slow a car down fast enough and bumping the one in front; another with not being able to get a heavy sled moving. Each will recognize the applicability of the concept of inertia at different times.

Campione et al. (1995) warn us as well that we won’t always be there to see transfer as it is happening, and even if we are there, students may not externalize their transfer in a way that will allow us to see it. We may not be there, for example, the day some student trying to remove some of the friction from his car’s wheel-and-axle system recalls the parachute falling slower with more surface area and asks whether that’s the same as friction. It is especially hard for us to find out about spontaneous transfer in explaining or reasoning about something outside the classroom unless a student reports it. Luckily, some of our students are so proud of what they’ve done that they do come in and report it — sometimes when we’re there, sometimes to the teacher, who passes it on to us. In one case, a student sent email to a teacher the following school year, telling him about what he had figured out. The teacher passed it on to us.

With all those warnings in mind, we are cautiously excited about the evidence of transfer we’re seeing in our classrooms. Some of our examples show that some students are indeed reusing knowledge spontaneously and spontaneously carrying out learned skills. Other examples show that, when prompted by their peers, nearly all students are able to remember, judge the applicability of, or apply something learned earlier. The examples reported below come from students across the classrooms of five teachers as they engaged in the Apollo 13 introductory unit and Vehicles in Motion during fall 1999 and winter 2000. Vehicles in Motion is an 8-week unit that introduces students to Newton’s Laws of Motion as they design and construct a miniature vehicle and its propulsion system. For more specifics on the unit, see Kolodner et al. (1998, 2003). Some students came from 8th-grade classes, some from 6th-grade classes. Most were from mixed-ability classes; a minority were from gifted and talented classrooms.

Day-to-day evidence of transferable learning: Spontaneous use of content knowledge

The first design challenge in the Apollo 13 launcher unit is to use index cards with paper clips to build a support that will hold up a textbook. Though the intent of the activity is for students to learn about collaboration and about design processes, there is some attention paid to the structures used to build the supports. Students quickly notice that creating columns provides the best method for support. Some students create triangular columns, others use round columns. The students walk from design to design during the gallery walks, watching carefully as each design is tested by balancing a book upon it to see if it will hold as the pages are turned. They experience the success or failure of different design concepts. They notice the several types of columns that their peers create, and they discuss the pros and cons of different shapes, sometimes bringing to bear, as well, the formal physics of structures.

Two design challenges later, the students were asked to create a bridge structure using plastic drinking straws and paper clips. In three different classes of three different teachers (two from the south end of the metro area (low SES), one from the north end (high SES)), the students spontaneously transferred what they had learned about structures from the book-support design to their plans for the bridge design. Our in-class observer (Fasse, field notes) observed one group, heads together over the table top, brainstorming ideas and madly sketching and waving their hands in the air in demonstration, when a boy blurted out, "Wait. No. No. Remember what worked best on the book support?" All heads turned toward him. A girl responded, "Oh, yeah, maybe we should spread the weight out." At this point the other two students joined in as the group returned to buzzing with ideas; ultimately, they decided to "use trusses to even out the weight." In this same class, during the pin-up presentation, another group justified their design choices by applying what they had learned from observation of their everyday world: "The supports and braces are in an X because I’ve seen it on a lot of bridges and it looks sturdy."

In another class, a group struggled with coming up with an idea for the bridge that would build on everyone’s ideas. They were having a hard time sticking with any idea. A girl who was also experimenting independently with materials held up the bundled materials she had been messing about with. Referring to the book support design challenge, she said to her group, "Hey, why couldn’t we use the index card idea? You know how we bundled them for support?" There was silence as everybody pondered the suggestion. All remembered the bundling once she pointed it out to them. A discussion of the practicality of this idea and some experimentation with materials followed. In this same class, when asked to explain their design choices, another group responded, "The last time we had a gallery walk, we saw something like this, so we thought we’d use it."

We captured transfer of structural knowledge in three different classrooms. We know that students used several different concepts from the bookstand as they were working on their bridge designs — the idea of a triangle as a strong, stable shape; the idea of a truss; and the idea of distributing weight. We know, as well, that there was a new bridge going up outside one of the schools, and students discussed, without prompting, the bridge and its design and how they could apply what they saw. Given Campione et al.’s (1995) warnings about transfer, it is safe to guess that similar transfer was happening across other classrooms.

An example in the introduction to this paper illustrates spontaneous use of content knowledge in a situation of farther transfer. The girl who applied what she was learning in class to explain why her mother’s car skidded was using her newly-gained knowledge of physics to explain a situation she had encountered outside the classroom.


Day-to-day evidence of transferable learning: Spontaneous use of science skills and engagement in science practices

Science fair projects are an annual event in many of our schools. A science fair project is an investigation designed, run, and reported by an individual student. Students choose topics that are of interest to them, and the teacher usually helps them narrow down their research question to something that is doable in a few weeks. Approaches vary among schools and teachers: some require all students to participate, others allow participation to be voluntary, and still others require all students to do a project but submit projects for judging only at a student’s request. Students in one of our eighth-grade classes did particularly interesting science fair projects this year.

This eighth-grade mixed-ability integrated science (physical, earth, life) class is in a rural-suburban school. They began the year with the Apollo 13 launcher unit and continued on with Vehicles in Motion. They had been through the whole LBD cycle several times — messing about, whiteboarding, pin-up sessions, gallery walks, designing and running experiments, extracting rules of thumb, and so forth. Their teacher did not want to spend a lot of time in class on science fair projects. She encouraged her students to create projects but told them that she would submit them for judging by the county only by request of the students. She dedicated only two periods per class (one day for research in the library, one day for planning) because, she said, "This should be something they do extra, outside of class." She did, however, check their topics and set deadlines for submission, spending one classroom period per section on the subject. She also posted a chart of the elements of a traditional science report (i.e., hypothesis, experiment, conclusions). She did not give the students suggestions on how to set up their experiments or even how to arrange or present their work.

To her surprise, several of the projects appeared to include spontaneous application of LBD rituals and of the kinds of good scientific practice the students had been carrying out in the LBD classroom. We learned through interviews with the students that indeed they had engaged in purposeful application of what they had been learning and using in class — choosing to use what they had learned in their LBD class in their science fair projects.

One student included an LBD-like whiteboard in her written report. When asked by our researcher to explain her reason for including it, she answered, "Because I needed to show them [the reader or judge] what I knew before I did the experiment." "Where did you get the idea to include this?" she was asked. "Oh, from science class. See we did this in Mrs. X’s class and, well, to explain how you know stuff and what you need to know."

In similar fashion, another student included on her poster project rules of thumb that she had identified as a result of her experiments. When asked to explain why she included them, she answered, "Well, like, we learned this in Mrs. X’s class this year and these are the things that you find are kind of like standard."

Another student discovered mid-experiment that she had failed to control an important variable; she identified it in her report and then told what she did about it. The teacher was surprised because she said she had never had a student recognize an uncontrolled variable before, or, if they had, they simply eliminated it from their report without giving it a second thought. The student was asked to explain her reason for her discussion of the uncontrolled variable. "Oh, that’s called fair testing. See, when you do an experiment, you have to have fair testing. And it is not fair testing if you are not using the right variable or if, like, there is some other variable and you don’t report it and deal with it." The researcher asked her why she didn’t just leave the variable out of the report once she recognized it as a mistake. She was incredulous at the suggestion. Clicking her tongue and rolling her eyes in disgust, she replied, "Because it wouldn’t be fair testing and would be not very good sci, well, it would be bad science. It would be wrong." (The other students at the table nodded emphatically in agreement, shaking their heads at the researcher’s ignorance.) She was then asked to explain how she had discovered this "fair testing" concept. "In science class. Mrs. X. Like whenever we do trials or like an experiment with our design projects, we have to make sure that we have the right variables and, you know, that we test the same way every time."

A fourth student was discussing what she had chosen NOT to include in her project. "I didn’t put in the part about, well, like when I was trying to figure out about the weights." She was asked to explain why it was okay to leave this out without violating the notion of fair testing. She answered, "Because it was not part of the experiment. It was, like, when I was trying to figure out what to use in the experiment, and I was just messing around, umm, well, yeah, uh, messing around. No, wait, that’s not it. Umm, what’s it called? Oh, yeah, messing about with the things I thought I might use and trying to figure out what would be best."

It’s interesting to see here that LBD practices, as well as the standard practices of science, became second nature to the students. Whiteboarding, they understand, is for the purpose of making clear what you know and what you need to learn. Rules of thumb, they understand, are for articulating what you’ve learned from an experiment. Experiments are run to learn something important, and it is important that they be carried out fairly or else you can’t believe the results. These students could recall from memory the procedures they should be following and could recognize and explain their applicability; most importantly, they could carry them out with skill after several times through their enactment.

This ability to use science skills and engage in LBD rituals on their own was common across classrooms, though we have less data from classrooms we visited less often. In one classroom, for example, students insisted on several more iterations while working on the bridge challenge — they were beginning to understand the benefits of another iteration. In one class, some of the students were employing the whiteboard concept in other subjects as "a good way of getting organized and taking notes." We observed students refusing to settle for the obvious by continuing to question other students in an exchange of ideas and hypotheses gleaned from experiences (cases) and knowledge, everyone contributing as the discussion evolved and moved forward without a word from the teacher, who sat in the back of the room watching.

What does transferable learning look like? Data from Performance Assessments

One of our means of assessing student capabilities is to administer performance tasks several times during the school year. That is, we give students a transfer task to work on, one that requires the skills and some of the knowledge from the unit they have been doing, and we ask them to work in small groups to address the task. We extract here a video-taped segment from one of those sessions to illustrate what spontaneous engagement in science practices looks like. In this case, three students from a mixed-ability LBD class in a low-to-medium SES school are working together to design a procedure for accurately measuring velocity (part 1 of the Velocity Performance Assessment administered after completion of the Vehicles unit in 1999).

The directions for part 1 of this task are: "Design a procedure to accurately measure velocity, and write your description of your procedure with enough detail that someone else could use it." The three students in the group begin to read the directions and discuss with each other how to proceed.

B1- Okay, "plan an experiment…" Okay, we could…Okay, all you have to do is just put…[ink discussion]

B2- "Average speed" okay to get average speed…how do you get average speed?

B3- You have distance and time

B1- Use a stopwatch to find out how many seconds per meter.

B2- How do you get speed out of that, I forgot.

B1- I have it in my notebook. I have the formulas in my notebook.

B2- We need to figure it out.

B1- You write down whatever you’re gonna do. We’ve just got to figure out the experiment we’ll use a meter stick or a tape. And then we can use a stopwatch and then we can figure out.

There are two interesting things to notice here. First, while the students didn’t remember the formula for calculating speed, they all seemed to remember that they had learned a formula and that it had something to do with distance and time. They also knew where to go to find the formula. Second, there were many procedural things they remembered how to do. Between them, they recognized that a stopwatch would be appropriate for measuring time and that a meter stick or tape would be good for measuring distance. They had measured the distance their balloon cars traveled, and they had previously measured the time it took their parachutes to drop (and probably had measured time under other circumstances outside of class). B1, who seemed to be the leader, remembered what it means to plan an experiment — it means writing down "whatever you’re gonna do." In the next segment, the students discuss measurement issues and the ins and outs of running a well-designed experiment. Particularly interesting is their discussion of how many trials they need to run.

B2 reads directions out loud.

B2- We must have the measurements of the distance the car travels and the time that it took to travel. The average speed of the car. What if the car is not stopping? The car keeps going.

B1- You don’t have to put how far it went. Just put the speed of it. Just put how fast it goes.

B2- I know but I am trying to figure out speed.

B1- Distance divided by time.

B2- We must have a measuring device.

B2- That wouldn’t be the average speed.

B3- We just have to do it a couple of times.

B2- Just a couple of times and then divide by that number of times.

B1- You can find the speed of an object.

B3- It says average speed, John (name changed).

B1- Okay, just turn it on and let it go for about five seconds and then you’ll get the same thing about every time.

B3- How about we make it start here [indicates one side of the table] and end it here [points to the other end of the table].

B1- You can just write. Do the test about two or three times and it will come about the same. Cause it is not like your balloon cars, it’s got batteries.

The dialog about measurement has a variety of interesting characteristics. First is the process of remembering the concepts involved in deriving speed and the attention to how they would go about doing the measurement. Although they clearly don’t have a full understanding of "average speed" (they confuse the average speed of a single trial with the average of measurements taken over a set of trials), they are clearly remembering and trying to apply several concepts and skills they’ve encountered — the idea of needing to run several trials and to average over them, and the need for the procedure in each trial to be identical each time it is run (B3’s indication that each trial should start at one specific place and end at another).
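For readers who want the distinction the students are groping toward made explicit, the following worked example (with hypothetical numbers of our own, not drawn from the assessment data) separates the two ideas: the speed of a single trial is that trial’s distance divided by its time, while averaging those per-trial speeds across repeated trials is a separate step that compensates for measurement variation:

```latex
% Speed within a single trial i:
%   v_i = d_i / t_i
% e.g., trial 1: 2.0 m in 4.0 s gives v_1 = 0.5 m/s;
%       trial 2: 2.4 m in 4.0 s gives v_2 = 0.6 m/s.
v_i = \frac{d_i}{t_i}
\qquad
% Mean across the n trials (what the students call the "average"):
\bar{v} = \frac{1}{n}\sum_{i=1}^{n} v_i
        = \frac{0.5 + 0.6}{2}
        = 0.55\ \text{m/s}
```

Running multiple trials and averaging addresses variance in measurement; it does not change what "average speed" means within any one trial.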

Perhaps the most wonderful suggestion of transfer in this segment is the students’ use of their own experience measuring their balloon cars’ behavior. Students noted during that part of the unit that measurements sometimes varied considerably across trials. This happened with the balloon cars because it is very hard to control the variables exactly the same way for each trial (e.g., one can come close to blowing up the balloon the same amount each time, but one can’t be exact). The balloon-car case had a great deal of variance in its measurements, and this was discussed in class. The students all recognized that multiple trials were needed. B1 refers to the balloon-car case as he is figuring out how many trials need to be run. He suggests "two or three times," fewer than were done with the balloon cars, because he recognizes that there won’t be as much variance here ("Cause it’s not like your balloon cars; it’s got batteries."). The implication is that a battery propulsion system should not have as much variance as a balloon propulsion system.

These two segments highlight the implicit skills these students seem to draw on while co-constructing their solution as they negotiated and distributed their work. We coded these segments as high on negotiation and distributed efforts, use and appropriateness of prior knowledge, use of science terms, use of science practices involved in experiment design, and self checks.

Looking further at this group’s interactions, we are also able to identify individual use of skills and knowledge. After writing down their individual procedures, these students had time left, and they decided to read their written response to each other.

B1- Okay, mine meets all this stuff. You want to hear what I got? Let me hear what you got [indicates B3]

B3- ‘To find the average speed of the car w/the formula distance divided by time, you use a meter tape to measure distance and a stopwatch to determine time. We will use inches and we will do this three or four times and find the average.’

B1- What do you have, B2?

B3- What do you have? [to B1]

B1- I’ve got ‘In our experiment we will have a measuring tape and a stopwatch. First we will turn on the toy car and stop it after we get to about 5 seconds using the stopwatch. Then we will use the measuring tape to see how [makes a correction on the paper] many inches it went in five seconds. When we have done all of this we will divide distance by time and then we will test it about three more times to see if it is about the same every time.’

B2- [reads his paper] ‘It will be necessary to have a measuring device and something to measure the time it took to travel that distance. After finding out the distance and time you divide distance by time. To find the average speed you must get data from more than one trial. Take this data, add them together, and divide by the number of trials. Using this toy battery car, our measuring unit will most likely be in inches. Turn on the car and let the car go as far as you would like and measure the distance. Stop the stopwatch when you stop the car. Do as many trials as you would like.’

There were some differences in the level of detail each student provided. However, it is clear that co-construction occurred during the group discussion, and together these students arrived at a shared solution. Noteworthy is that B1 actually checked his response against the original directions and was pleased with himself: as he looked at the list of things to include from the directions, he said, "mine meets all this stuff…"
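The procedure the students converge on — measure distance and time in each trial, divide distance by time, and average across trials — is straightforward to express in code. This sketch uses invented measurements purely for illustration:

```python
def average_speed(trials):
    """Each trial is a (distance_inches, time_seconds) pair.
    Compute speed per trial as distance / time, then average the
    trial speeds, as the students describe in their procedures."""
    speeds = [distance / time for distance, time in trials]
    return sum(speeds) / len(speeds)

# Hypothetical measurements: three 5-second trials of the battery-powered toy car.
trials = [(60.0, 5.0), (62.5, 5.0), (57.5, 5.0)]
print(average_speed(trials))  # 12.0 inches per second
```

Averaging over several trials, rather than trusting a single run, is exactly the lesson the students carried over from the balloon-car case.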


Evidence of better transfer among LBD students: Data from Performance Assessments, 1999-2000 and 2000-2001

To compare the capabilities of LBD and non-LBD students, we ran performance assessment tasks in both LBD classes and matched comparison classrooms, matched for achievement level of the students, teacher understanding of the material, and socio-economic status. During 1999-2000, we ran performance assessments at the completion of the Vehicles unit; during 2000-2001, we ran them both before that unit (after completion of the launcher unit) and after it. In each assessment, students work in groups to design an experiment or procedure for investigating some issue of significance related to the content they'd been learning in class; they then run an experiment whose procedure we specify for them, and then they analyze the results. We record their interactions on video and collect the papers with their plans, solutions, and analyses on them.

In 1999-2000, when we administered performance assessment tasks only after the Vehicles unit, we asked students to design an experiment examining the effects of different types of rubber on different road conditions and the force needed to overcome sliding friction. We then asked them to run an experiment whose procedure we made available to them and to analyze the results. The task was taken from the PALS (1999) database. In 2000-2001, we administered one performance assessment task after the launcher unit (approximately two months into the school year) and a separate one after the Vehicles unit. We used the rubber task after the launcher unit and, as the second task, one that asks students to measure the velocity of a toy car.

For all tasks, we analyzed student behavior on a variety of practice-related measures: negotiations during collaboration; distribution of the task; identification of prior knowledge; adequacy of the prior knowledge mentioned; use of scientific terminology; several science practices (those used in designing experiments, running experiments, and analyzing results); and self checks (during experiment design, running experiments, and analysis). We assigned a score for each measure to each group, rating them on a Likert scale of 1 to 5, with 5 the highest score. Typically, a score of 5 means that all students in the group participated in the practice well, 4 means that most did, 3 means that one or more did, 2 means they tried, and 1 means they did not participate.
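The group-level rubric just described maps how many group members participated in a practice to a 1-5 score. A minimal encoding of that rubric might look as follows; the function name and the threshold used for "most" are our own reading of the scale, not part of the published coding scheme:

```python
def rubric_score(n_participating, group_size, tried=False):
    """Score a group on one measure, following the paper's scale:
    5 = all participated in the practice well, 4 = most did,
    3 = one or more did, 2 = the group tried, 1 = did not participate."""
    if n_participating == group_size:
        return 5
    if n_participating > group_size / 2:  # "most" — our interpretation
        return 4
    if n_participating >= 1:
        return 3
    return 2 if tried else 1

print(rubric_score(3, 4))  # 4: most of a four-student group participated
```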

In 1999-2000, we compared a typically-achieving LBD class to a typically-achieving comparison class, and we compared a high-achieving LBD class with a high-achieving comparison class (Table 4), collecting our data between February and April of the school year (LBD data was collected earlier than non-LBD data). For both comparisons, LBD students engaged in practices more readily than non-LBD students on a variety of dimensions. The typically-achieving classes had statistically significant differences in mean scores for the distributed and self-checks measures, and a non-significant trend for prior knowledge adequacy; in each case, the LBD means were higher than those of the comparison class. Comparing advanced-achieving LBD students with advanced-achieving comparison students, LBD means were significantly higher for the negotiation, science practices, and self-checks measures (Gray, Camp, Holbrook & Kolodner, 2001). In both comparisons, LBD students show evidence of better ability to participate in collaboration (distribution and negotiation) and to plan and run experiments (self checks and science practices) than do students in comparison classrooms. Perhaps most interesting, the scores of typically-achieving LBD students are indistinguishable from those of honors non-LBD comparison students. LBD students in typically-achieving classes learn as well as or better than non-LBD honors students to participate in the practices of scientists and, through such participation, bring their solutions up to the level of (non-LBD) honors students.

Table 4: Results of Performance Assessments for 1999-2000: Means and Standard Deviations for Comparison and Learning by Design Students after the Vehicles content unit (Feb. to April of the school year)

Coding Categories        | Typical Comparison | 1999-2000 Typical LBD       | Honors Comparison | 1999-2000 Honors LBD
Self Checks              | 1.50 (.58)         | 3.00 (.82)**, t(6) = 3.00   | 2.33 (.58)        | 4.25 (.50)***, t(5) = 4.715
Science Practice         | 2.25 (.50)         | 2.75 (.96)                  | 2.67 (.71)        | 4.75 (.50)***, t(4) = 4.648
Distributed Efforts      | 2.25 (.50)         | 3.25 (.50)*, t(6) = 2.828   | 3.00 (1.00)       | 4.00 (1.15)
Negotiations             | 1.50 (.58)         | 2.50 (1.00)                 | 2.67 (.58)        | 4.50 (.58)***, t(5) = 4.158
Prior Knowledge adequate | 1.50 (.58)         | 2.75 (.96)                  | 2.67 (1.15)       | 3.50 (1.00)
Prior Knowledge          | 1.75 (.50)         | 2.25 (.50)                  | 3.00 (.00)        | 3.75 (1.50)
Science Terms            | 1.75 (.50)         | 2.75 (.96)                  | 2.67 (.71)        | 3.50 (1.00)
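The group comparisons reported here are independent-samples t-tests on small numbers of groups. With the reported means, standard deviations, and n = 4 groups per class, a pooled two-sample t can be recomputed from the summary statistics alone. This sketch reproduces the distributed-efforts comparison for the typically-achieving classes, which the table reports as t(6) = 2.828:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t from summary statistics,
    assuming equal variances (pooled standard deviation)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return t, df

# Distributed efforts: typical LBD (M = 3.25, SD = .50) vs.
# typical comparison (M = 2.25, SD = .50), 4 groups in each class.
t, df = pooled_t(3.25, 0.50, 4, 2.25, 0.50, 4)
print(round(t, 3), df)  # 2.828 6
```

With so few groups per cell, these tests have little power, which makes the significant differences that do appear all the more notable.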

When we look more closely at the videos themselves, we see that students in LBD classrooms participate in collaboration characterized by negotiation and distribution of the work. Students in comparison classrooms work in groups without taking advantage of the unique possibilities that arise when work is distributed or solutions negotiated, resulting in less flexible, more conceptually impoverished solutions.

In 2000-2001, we extended our findings by assessing students during a performance task early in the school year, after completion of our launcher unit, and again later in the year after completion of Vehicles. This analysis allows us to determine the early-in-the-year effects of LBD, showing us whether the launcher unit is indeed promoting initial learning of the targeted skills and practices. It also allows us to begin to describe the developmental progression of these skills and practices across time.

Table 5 presents the results of performance assessments that we completed two months into the school year — after the LBD launcher unit and in the same timeframe for non-LBD comparison classrooms. Here we show two sets of matched comparisons. Both LBD classes are typical-achievement classes, and each teacher and student population has been matched to a teacher with similar capabilities and students with similar achievement and SES. Comparing columns 2 and 3 and comparing columns 4 and 5, one can see that students in both LBD classes show significantly higher ratings than their non-LBD comparisons on self checks and science practices, and students in the LBD 2 condition (columns 4 and 5) showed significantly higher ratings on negotiations and distributed efforts as well, all after only two months into the school year and before the LBD students had covered significant content.

Table 5: Results of Performance Assessments for 2000-2001: Means and Standard Deviations for Comparison and Learning by Design Students after the Launcher (two months into the school year)

Coding Categories        | Typical Comparison 1 | Typical LBD 1              | Typical Comparison 2 | Typical LBD 2
Self Checks              | 1.00 (.00)           | 2.10 (.74)*, t(7) = 2.925  | 1.33 (.58)           | 3.00 (.79)**, t(6) = 3.141
Science Practice         | 1.20 (.50)           | 2.30 (.45)*, t(7) = 3.326  | 1.33 (.58)           | 2.30 (.45)*, t(6) = 2.677
Distributed Efforts      | 1.38 (.48)           | 2.40 (1.14)                | 1.67 (1.16)          | 3.90 (.82)**, t(6) = 3.234
Negotiations             | 1.25 (.29)           | 1.90 (1.03)                | 1.50 (.87)           | 3.70 (.84)***, t(6) = 3.558
Prior Knowledge adequate | 1.88 (.25)           | 1.70 (.67)                 | 1.83 (1.04)          | 2.70 (.96)
Prior Knowledge          | 2.00 (.00)           | 2.30 (.45)                 | 2.00 (1.32)          | 2.40 (.82)
Science Terms            | 1.38 (.48)           | 1.50 (.48)                 | 1.50 (.87)           | 2.00 (.71)


Table 6 shows the same data as Table 4 and adds in late-in-the-year data from 2000-2001. As in 1999-2000, students in every LBD class were rated higher on self checks and ability to carry out science practices than were their comparison peers. Notably, one LBD class (column 5) had ratings significantly higher than its comparison class (column 4) on every category coded. As in 1999-2000, the performance of LBD typically-achieving classes was on par with the non-LBD comparison honors class, while this one high-performing typically-achieving LBD class (column 5) outperformed the non-LBD comparison honors class (column 6) on seven out of eight dimensions.



What does it mean?

The data we’ve presented, both observational and quantitative, have for the most part been derived from analysis of students’ group interactions. We’ve shown that when students in an LBD classroom work in groups on designing an experiment or procedure, they can collaboratively use their skills, applying what they’ve learned spontaneously and at appropriate times. Our data show that at least some LBD students are quite skillful at using their knowledge and skills to engage in the practices of scientists and that, on the whole, LBD students are better than their comparisons at using knowledge and skills to engage collaboratively in those practices.

But the transfer literature focuses on learning in individuals, and the only data we report here that gets at individual transfer is the science fair projects. What, then, are we showing? We refer back to Bransford et al.’s (1999) points and Campione et al.’s (1999) admonitions about transfer. Both emphasize the developmental and dynamic character of transfer. The ability to transfer as an individual requires remembering the right things at the right times, figuring out whether what is remembered is applicable and how to apply it, and applying it well. Our science fair project examples show that at least some students can do that for some of the science skills LBD aims to have them learn. The rest of our data shows two other things.

First, it shows evidence that LBD students, together as a group, can prompt each other in ways that let the group engage in scientific practices. Not every member of every group has every skill, the ability to recognize that a skill is applicable, the knowledge of how to apply it, and so on. But between them, working as a group, the students are able to prompt each other to remember which content and skills should be applied when and how to apply them. Recall that higher scores on the performance assessments correspond to more of the group participating in the practices. For example, in the transcript presented above, one individual remembered the formula for computing velocity. The others in the group used his contribution to achieve the goal of designing a procedure for its measurement. Another student contributed knowledge about the practice of running multiple trials. Others were then prompted to recall their experience with issues around this practice.

Table 6: Results of Performance Assessments for 1999-2000 and 2000-2001: Means and Standard Deviations for Comparison and Learning by Design Students after the Vehicles unit (between Feb. and April)

Coding Categories        | Typical Comparison (1999-2000) | Typical LBD (1999-2000)    | Typical Comparison (2000-2001) | Typical LBD (2000-2001)     | Honors Comparison (1999-2000) | Honors LBD (1999-2000)       | Honors LBD (2000-2001)
Self Checks              | 1.50 (.58)                     | 3.00 (.82)**, t(6) = 3.00  | 1.30 (.67)                     | 3.88 (1.03)*, t(7) = 5.548  | 2.33 (.58)                    | 4.25 (.50)***, t(5) = 4.715  | 5.00 (.00)***, t(3) = 6.197
Science Practice         | 2.25 (.50)                     | 2.75 (.96)                 | 1.40 (.89)                     | 3.75 (1.32)*, t(7) = 3.188  | 2.67 (.71)                    | 4.75 (.50)***, t(4) = 4.648  | 4.75 (.35)**, t(3) = 4.443
Distributed Efforts      | 2.25 (.50)                     | 3.25 (.50)*, t(6) = 2.828  | 1.70 (.84)                     | 3.00 (.00)*, t(7) = 3.064   | 3.00 (1.00)                   | 4.00 (1.15)                  | 4.25 (.35)
Negotiations             | 1.50 (.58)                     | 2.50 (1.00)                | 1.40 (.65)                     | 2.88 (1.03)*, t(7) = 2.631  | 2.67 (.58)                    | 4.50 (.58)***, t(5) = 4.158  | 4.00 (.00)*, t(3) = 3.098
Prior Knowledge adequate | 1.50 (.58)                     | 2.75 (.96)                 | 1.60 (.89)                     | 3.88 (.75)*, t(7) = 4.059   | 2.67 (1.15)                   | 3.50 (1.00)                  | 4.25 (.35)
Prior Knowledge          | 1.75 (.50)                     | 2.25 (.50)                 | 1.60 (.89)                     | 3.75 (.87)*, t(7) = 3.632   | 3.00 (.00)                    | 3.75 (1.50)                  | 3.75 (.35)
Science Terms            | 1.75 (.50)                     | 2.75 (.96)                 | 1.50 (.87)                     | 2.88 (.63)*, t(7) = 2.650   | 2.67 (.71)                    | 3.50 (1.00)                  | 4.00 (.00)

* = p < .03; ** = p < .02; *** = p < .01

N = number of groups; most groups consisted of 4 students each.

(Means are based on a Likert scale of 1 to 5, with 5 being the highest rating.)

Reliability for the coding scheme ranged from 82 to 100 percent agreement when two coders independently rated the tapes. For this set of data, a random sample of four to five tapes was coded for each teacher from one class period. Approximately 60 group sessions are represented in this table, representing 240 students.

That LBD students score higher than their comparisons means that they use their skills more effectively in collaboration than do non-LBD students. While we don’t have evidence that every child in every high-scoring group can remember and apply content and skills, we do have evidence that, when reminded, LBD students engage at a higher rate in using what they have learned. The data show that LBD students, on the whole, are further along the way to full transfer as individuals than are their comparisons.

Second, our data indicate that LBD students learn collaboration skills while engaging in practices. The general tone of these events in LBD groups is one of mutual recognition that each member has something to contribute. As individuals bring their diverse skill levels to the group, there seems to be an implicit understanding that they will achieve more if they incorporate and negotiate each other's contributions. We don’t see that as prominently in the videos of comparison students. There also seems to be an additional important quality in the group interactions among LBD students: a sense of the other. The student in our example who asked his team to read their answers to each other deferred to the others before reading his own. He was so proud of and excited by his response that he nearly blurted it out, but then he stopped himself and reflected, it seemed, on the practice of asking for the others' contributions before sharing his own.

What about data on individuals’ abilities to use what they’ve learned? We have data assessing individual understanding, and we are collecting data on individual experimental design and explanation of results. What we’ve analyzed of that data so far (Kolodner et al., submitted) shows that LBD students know as much as or more than comparison students. However, we still need to analyze the data we have on individuals’ ability to use the knowledge and skills they’ve learned, and we need to collect more systematic data about individuals’ ability to use skills they’ve learned to accomplish new tasks. Our intention in further research is to use performance assessment tasks in several sequences examining both individuals’ and groups’ ability to use what they’ve learned over the course of a school year. Such analysis should help us see how individual and group capabilities change with respect to each other.

Getting to such success

It was less of a surprise to us that students were growing more competent at science practices and spontaneously practicing them than it was to their teachers, as we had designed LBD with such transfer in mind. It is important, however, to point out some of what the teacher needs to be doing in the classroom to promote such learning. Two practices by the teacher seem to be key:

Our observations show us that when the LBD teacher makes a point of adopting and incorporating the language and rituals of LBD, students follow suit. Eighth grade students quickly move towards using the terminology the teacher uses, modifying their language to fit the situation quite skillfully. When the teacher uses the vocabulary of LBD and science practice (e.g., collaboration, iteration, constraints, criteria, fair test, trials), the students do also. When the teacher helps students identify good uses of the vocabulary and good enactments of skills, the students gain real expertise. By the second design challenge of the Apollo 13 launcher unit, for example, students begin to use the terminology of LBD when they talk among themselves or when an adult asks for explanation. For example, one hears students arguing over whether a particular plan for a design meets the "criteria." Or, if asked to explain why they are doing something a certain way, they will respond, "Because we have a cost constraint."

At the beginning, students struggle with the concept of "collaboration," as it seems to them to be either stealing someone else’s ideas or relinquishing ownership of their own. Usually midway through the Apollo 13 unit, and certainly by the end, even the most reluctant collaborators have been transformed into willing sharers of ideas: making suggestions, showing what they have, asking for help, and crediting the origin of an idea gleaned from others. This comes from experiencing what they can personally gain when they incorporate the ideas of others with their own and from their pride when they hear others giving them credit for ideas. The change is most remarkable among students who have a history of being the top achievers. These students are used to relying only upon themselves to get it right, and they don’t like sharing the credit; the less successful students have formed a pattern of deferring to them. Because the LBD approach depends on collaboration both within and between groups, students abandon these habits as they come to rely on and learn from the ideas, observations, and skills of their partners and classmates. As they come to understand the terminology, they make the language more and more their own, sometimes being quite creative in its application. Students caught gossiping or engaging in off-task visiting frequently smile good-naturedly at the teacher and claim to be "just collaborating, Mrs. Z," followed by gales of laughter.

Their employment of the concept of pin-up is similar. By eighth grade, students have been assigned poster projects ad infinitum. While it has unique features, the LBD pin-up is not unlike the poster projects with which they are familiar. And yet the students prefer to refer to them as "pin-ups." They know that pin-ups have a specific set of requirements (schematics, multiple views) and that they are reference tools posted around the classroom for generating ideas to be tested. It is very common to see students (and often the teacher as well) get up from the table where their group is working and move to a pin-up on the wall, using it to make a point or a recommendation or to generate or justify a fresh idea for the others in the group. There is nearly perpetual movement around the classrooms, as someone (students and/or teacher) seems always to be using a pin-up, sometimes their own, sometimes another group’s, to spur the discussion, design, or argument. Early in this paper an example was given of two boys who formed their own group and used the pin-up ritual before beginning construction. The teacher responded to the researcher’s question of what role she had played in the boys’ decision to use the pin-up, "Oh, they did that on their own. I didn’t assign them to work on anything. They just did it. They did the pin-up because they know that is part of the design process…. Now, it wasn’t pretty or anything, but they did the sketches of their ideas."

The gallery walk, not wholly unlike the familiar show-and-tell, is a ritual with which students have a sort of love-hate relationship. They look forward to gallery walks for the opportunity to get ideas and learn from their classmates, yet they are rarely ready to give up on construction or testing when it is time to stop for a gallery walk. The collective groan heard when the teacher announces an impending gallery walk is related not to the ritual itself but to the "but, Mr. M, we’re not ready yet" factor. However, once the gallery walk begins, the students leave their construction to move from table to table watching, listening, and making suggestions about their classmates’ ideas and then, in turn, sharing what they have constructed with the other groups. Groups that are stymied will request solutions from the viewers, who often offer very real help with the problem. Students realize how valuable the gallery walk is for sharing ideas. One group explained their bridge construction as follows: "We saw this kind of thing, something like this, in a gallery walk and thought we’d like to try it on our own bridge." By the end of the LBD program, the gallery walk concept has become so comfortable to the students that they do not need an official or formal gallery walk to move from table to table getting or giving ideas. They simply do it on their own. They have also come to accept that the work does not have to be completed to be of value or to be presentable.

Concluding Thoughts

Learning that takes place in the context of trying to solve challenging problems that are linked to the world seems to be deep and enduring. If students feel they are making a contribution to the solution of real problems, they seem to engage completely in the learning activities (Barron et al., 1998). As students get more and more opportunities to practice problem solving and reflect on the application of their effort to real solutions, they can’t help but build cases of their experience. Reasoning with cases is transfer. LBD is designed to be a rich environment for scaffolding case acquisition, and it attempts to make transfer happen routinely through its rituals.
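The case-acquisition-and-reuse cycle LBD builds on can be caricatured computationally: encode each experience as a case with index features, and when a new situation arises, retrieve the most similar stored case and adapt its lesson. In this toy sketch of the "reminding" step, the case library, feature names, and overlap metric are all invented for illustration:

```python
def retrieve(case_library, new_situation):
    """Return the stored case whose index features overlap most
    with the features of the new situation (the 'reminding' step
    of the case-based reasoning cycle)."""
    def overlap(case):
        return len(case["features"] & new_situation)
    return max(case_library, key=overlap)

# Hypothetical student cases, indexed by features of the situation.
library = [
    {"name": "balloon car", "features": {"measurement", "variance", "propulsion"},
     "lesson": "run many trials when variance is high"},
    {"name": "parachute drop", "features": {"timing", "air resistance"},
     "lesson": "drop from the same height every trial"},
]
case = retrieve(library, {"measurement", "propulsion", "battery"})
print(case["name"])  # balloon car
```

This is exactly the kind of reminding B1 exhibits in the transcript above: features of the new situation (measuring a propelled vehicle) index into the balloon-car case, whose lesson about multiple trials is then adapted to the lower-variance battery car.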

Teachers who want to promote skill learning commonly have their students do significant project-based work, in which students carry out the skills in the context of an interesting project. Also common in middle-school classrooms is a series of short, focused lab activities that repeat a set of inquiry and investigative skills again and again. But students don’t seem to gain as much enduring knowledge from these kinds of practices as their teachers would like (Mintzes, Wandersee & Novak, 1998). Project work is often done out of class, without discussion of and reflection on the skills students are using, and as a one-time activity. Short, focused lab activities are generally aimed at learning some science concept outside the context of achieving any practical goal; students learn much science content, but they fail to learn the applicability of the practices they are using, when to use each, and how to plan toward sequencing practices in ways that will achieve a real-world kind of problem solution.

LBD is an attempt to infuse project-based and inquiry-based science classrooms with the kinds of practices that will promote transfer of what’s being learned to new situations. We want students to spontaneously remember a science concept and apply it and to spontaneously recognize the need for some practice and to use their skills and knowledge to engage productively in the practice. The transfer literature tells us many of the complexities involved in promoting transfer. Case-based reasoning provides for us a way of viewing transfer as spontaneous reminding and application, asking us to focus on classroom activities that promote the right kinds of encodings and much practice applying what one is learning and mindfully noticing one’s successes and explaining one’s failures so as to iteratively debug those encodings. We have evidence that students in LBD classes are exhibiting better transfer of skills — they use their skills to engage more expertly in the practices of scientists and designers.

Do we know that LBD activities are indeed promoting transfer for every student? Our examples from classroom observations don’t allow us to answer that question with empirical evidence, but we have reported a number of findings that suggest that transfer is happening in our LBD classrooms. Every student group we've evaluated from our sample is exhibiting some interesting transfer, and we continue to examine the trends in data collected from a much larger sample and comparison population with a second level of analysis. We are in the process of examining self checks as examples of clarification, elaboration and correction episodes. We are looking at each of our coding categories in more detail.

Do we know exactly which practices in the classroom are responsible for our results? We are currently working to use our LBD classroom observations to formalize a way of assessing the fidelity of implementations. We can then examine our learning-outcome data as a function of fidelity of implementation. From this kind of analysis, we would be in a position to link results to particular classroom practices. We do know that exposure to similar situations over time, and application of the same practices in those situations, is a built-in feature of LBD and one that we predicted would result in the kinds of transfer of practices that we see in our observations and our emerging data analyses.

Because LBD is based on our theoretical understanding of transfer, we are not surprised to witness these examples of transfer. Our rules of thumb tables are designed to help students associate conditions of applicability to what they are learning. Our gallery walks are designed to help students make connections between their intents, their solutions, and what actually happens. They are designed, as well, to give students a chance to have the range of experiences that will allow them to learn the richness of the concepts they are learning and their applicability. Extraction of rules of thumb about concepts they are learning as well as about the practices they are engaging in is meant to encourage learning and ability to transfer both concepts and practices. That the students engage in the same sequences of activities over and over in the context of different design challenges is meant to make it easy for them to derive schemas of the practices and the ways they connect to each other. The common experiences students have make it easy for teachers to remind students of what previous experiences they might consider as they are moving forward, providing students hints about which old experiences to recall, re-interpret, and re-encode for better accessibility later.
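A rule of thumb, as used in LBD's rules-of-thumb tables, pairs a condition of applicability with a piece of advice. Representing rules that way makes the role of the condition explicit. This sketch (the rules and feature names are invented examples) shows how stored rules could be filtered by a new situation's features:

```python
rules_of_thumb = [
    # (conditions of applicability, advice) — invented examples
    ({"high variance"}, "run multiple trials and average"),
    ({"comparing designs"}, "change only one variable at a time"),
]

def applicable(rules, situation):
    """Return the advice of every rule whose conditions of
    applicability are all present in the situation."""
    return [advice for conds, advice in rules if conds <= situation]

print(applicable(rules_of_thumb, {"high variance", "battery power"}))
# ['run multiple trials and average']
```

The pedagogical point is the same one made in the text: a rule learned without its conditions of applicability cannot be retrieved at the right time, so the rituals that extract rules of thumb deliberately attach those conditions.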

On the other hand, more research is needed to know for sure that the results we are seeing are common ones and to know that they are indeed a result of LBD's practices. And more research is needed to better understand promoting transfer. Our computational models from case-based reasoning provide a framework for understanding the processes involved in transfer; case-based reasoning indeed adds to our understanding of the processes that might be involved in transfer. We can use that model to help us devise ways of understanding human activity better, but we can’t assume that the model is correct in all details -- we must continue to test the model in the real world of human activity.

Can students acquire knowledge and reasoning skills that will be applied to new problems and become members of a cultural system? How can this be orchestrated? What kinds of knowledge structures connect these operations? How are they accessed for application to a new situation? We don’t have complete answers to any of these questions yet, but we suggest that case-based reasoning’s framework might help further our understanding. If knowledge is organized and structured as cases, the ubiquity of narrative structure is certainly implied. Suppose we asked students to read the narrative cases of experts in addition to engaging in their own design and modeling activities. What does it take for them to be able to liken those cases to their personal cases and to abstract rules of thumb from them? Using such cases as parts of our units, we might be able to systematically manipulate and examine the effects of a variety of factors relevant to case-based reasoning. How does the sequencing of cases make a difference? What practices are needed to compensate if one cannot control that sequencing? Gallery walks provide students with contrasting sets of cases from which to draw rules of thumb and other generalizations, and they help students focus attention. Manipulating the use of contrasting sets of expert cases in the context of an LBD unit might help us understand more about the role that contrastive sets play in promoting reuse, allowing us to make suggestions about the kinds of materials to make available with an LBD unit and the kinds of opportunities for teachers to notice and take advantage of when they occur naturally. Will expert cases or personal cases have the greater effect? Should we use expert cases at all? Many interesting questions can be generated from such a line of research.

Future research will need to address these questions in the ecological settings that learners participate in. That means it will need to consider two levels of transfer — the individual cognitive level and the socio-cultural level, where cognition is distributed across the group. Some of our more powerful examples of transfer have been embedded in the lived experience of our classroom communities — what are the contributions of the community to an individual’s ability to transfer?

Another important consideration for future research on transfer is accounting for the knowledge content domain and the learner's prior knowledge. Zimmerman (2000) provides a useful distinction between the various lines of research on scientific reasoning. She suggests that the two main types of knowledge addressed in this literature can be summarized as domain-specific knowledge about the content and concepts of science and the domain-general strategies that should emerge as part of the process of doing science. She notes, however, that these general reasoning strategies are not likely to be isolated in tasks that are "knowledge-lean" (2000, p. 139). As others have recognized, general strategies emerge out of repeated experience with problems rich in domain content. To fully understand the emergence of scientific reasoning skill, prior knowledge will need to be included in the picture. Are cases one way that students might organize and access their knowledge? The challenge will be to continue to explore the role of prior knowledge in the acquisition of new knowledge as the learner engages in authentic problem solving. The lens of case-based reasoning has helped us understand how to promote transfer; case-based reasoning's computational models of cognition may provide a powerful tool for investigating transfer in more detail.


American Association for the Advancement of Science (AAAS). (1993). Benchmarks for science literacy. Project 2061: Science of all Americans. Washington, DC: Author.

Anderson, J.R., L.M. Reder, and H.A. Simon (1996). Situated learning and education. Educational Researcher, 25:4 (May), pp. 5-11.

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Barron, B. J., Schwartz, D. L., Vye, N. J., Moore, A., Petrosino, A., Zech, L., Bransford, J. D., & The Cognition and Technology Group at Vanderbilt. (1998). Doing with understanding: Lessons from research on problem and project-based learning. Journal of the Learning Sciences, 7, 271-311.

Barrows, H. S. (1985). How to design a problem-based curriculum for the preclinical years. NY: Springer.

Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and implications of expertise. Chicago, IL: Open Court.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.) (1999). How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.

Bransford, J. D., & Stein, B. S. (1993). The ideal problem solver (2nd ed.). New York: Freeman.

Bransford, J. D., Zech, L., Schwartz, D., Barron, B., Vye, N. J., & Cognition and Technology Group at Vanderbilt (1998). Designs for environments that invite and sustain mathematical thinking. In P. Cobb (Ed.), Symbolizing, Communicating, and Mathematizing: Perspectives on Discourse, Tools, and Instructional Design. Mahwah, NJ: Erlbaum.

Brown, A. L. (1992). Design experiments. Journal of the Learning Sciences, 2, 141-178.

Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell & E. M. Markman (Eds.), Handbook of Child Psychology: Vol. 3. Cognitive Development (4th ed., pp. 78-166). New York: Wiley.

Campione, J. C., Shapiro, A. M. & Brown, A. L. (1995). Forms of Transfer in a Community of Learners: Flexible Learning and Understanding. In McKeough, A., J. Lupart & A. Marini (Eds.). Teaching for Transfer: Fostering Generalization in Learning. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.

Cognition and Technology Group at Vanderbilt (1997). The Jasper Project: Lessons in Curriculum, Instruction, Assessment, and Professional Development. Mahwah, NJ: Erlbaum.

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, Learning, and Instruction: Essays in Honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Erlbaum.

Dagher, Z. R. (1998). The case for analogies in teaching science for understanding. In J. J. Mintzes, J. H. Wandersee, & J. D. Novak (Eds.), Teaching science for understanding: A human constructivist view (pp. 195-212). New York: Academic Press.

Ericsson, K. A., Krampe, R. T., & Tesch-Romer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363-406.

Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-355.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1-38.

Gray, J. T., Camp, P. J., Holbrook, J., & Kolodner, J. L. (2001). Science talk as a way to assess student transfer and learning: Implications for formative assessment. Paper to be presented at the Meeting of the American Educational Research Association, Seattle, WA.

Greeno, J. G. (1997). On claims that answer the wrong questions. Educational Researcher, 26(1), 5-17.

Hallinger, P., Leithwood, K., & Murphy, J. (Eds.) (1993). Cognitive Perspectives on Educational Leadership. New York: Teachers College Press, Columbia University.

Hammond, K. J. (1989). Case-Based Planning. New York: Academic Press.

Hmelo, C.E., Holton, D.L., Kolodner, J.L. (2000). Designing to Learn about Complex Systems. Journal of the Learning Sciences, Vol. 9, No. 3.

Holbrook, J. & Kolodner, J.L. (2000). Scaffolding the Development of an Inquiry-Based (Science) Classroom. In Proceedings, International Conference of the Learning Sciences 2000 (ICLS).

Holyoak, K. J. (1984). Analogical thinking and human intelligence. In R. J. Sternberg (Ed.), Advances in the Psychology of Human Intelligence (Vol. 2, pp. 199-230). Hillsdale, NJ: Erlbaum.

Holyoak, K. J., & Thagard, P. (1995). Mental Leaps. Cambridge, MA: MIT Press.

Klahr, D., & Carver, S. M. (1988). Cognitive objectives in a LOGO debugging curriculum: Instruction, learning, and transfer. Cognitive Psychology, 20, 362-404.

Kolodner, J. L. (1983). Maintaining memory organization in a memory of events. Cognitive Science, 7(4).

Kolodner, J.L. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.

Kolodner, J.L. (1997). Educational Implications of Analogy: A View from Case-Based Reasoning. American Psychologist, Vol. 52, No. 1, pp. 57-66.

Kolodner, J. L., Crismond, D., Gray, J., Holbrook, J., & Puntambekar, S. (1998). Learning by Design from theory to practice. Proceedings of ICLS 98 (pp. 16-22). Charlottesville, VA: AACE.

Kolodner, J. L., Crismond, D., Fasse, B. B., Gray, J. T., Holbrook, J., Ryan, M., & Puntambekar, S. (2002). Problem-based learning meets case-based reasoning in the middle-school science classroom: Putting a Learning-by-Design curriculum into practice. Journal of the Learning Sciences, 11.

Kolodner, J. L. & J. Holbrook (submitted). Integrating Technology Education and Science Education through Learning by Design.

Koschmann, T. D., Myers, A. C., Feltovich, P. J., & Barrows, H. S. (1994). Using technology to assist in realizing effective learning and instruction: A principled approach to the use of computers in collaborative learning. Journal of the Learning Sciences, 3, 225-262.

Kuhn, D. (1997). The view from giants' shoulders. In L. Smith, J. Dockrell, & P. Tomlinson (Eds.), Piaget, Vygotsky and beyond: Future issues for developmental psychology and education. New York: Routledge.

Mintzes, J. J., Wandersee, J. H., & Novak, J. D. (Eds.). (1998). Teaching science for understanding: A human constructivist view. New York: Academic Press.

National Research Council (1996). National science education standards. Washington DC: Author.

Nersessian, N. (1992). How do scientists think? Capturing the dynamics of conceptual change in science. In R. Giere (Ed.), Cognitive models of science (pp. 3-44). Minnesota Studies in the Philosophy of Science, Vol. XV. Minneapolis, MN: University of Minnesota Press.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117-175.

PALS (1999).

Pennington, C. & Rhew, C., (2000). Divas of Design: Implementing Learning by Design™. Paper presented at the Meetings of the Georgia State Science Teachers Association. February, Macon, GA.

Perfetto, G. A., Bransford, J. D., & Franks, J. J. (1983). Constraints on access in a problem solving context. Memory and Cognition, 11, 24-31.

Redmond, M. (1992). Learning by Observing and Understanding Expert Problem Solving. Unpublished Ph.D. thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA.

Riesbeck, C. K. & Schank, R. C. (1989). Inside Case-Based Reasoning. Mahwah, NJ: Erlbaum.

Salomon, G. & Perkins, D. N. (1989). Rocky roads to transfer: Rethinking mechanisms of a neglected phenomenon. Educational Psychologist, 24, 113-142.

Scardamalia, M., Bereiter, C., & Steinbach, R. (1984). Teachability of reflective processes in written composition. Cognitive Science, 8, 173-190.

Schank, R. C. (1982). Dynamic Memory. New York: Cambridge University Press.

Schank, R. C. (1999). Dynamic Memory Revisited. New York: Cambridge University Press.

Schoenfeld, A.H. (1983). Problem solving in the mathematics curriculum: A report, recommendation and an annotated bibliography. Mathematical Association of America Notes, No. 1.

Schoenfeld, A. H. (1985). Mathematical Problem Solving. Orlando, FL: Academic Press.

Schoenfeld, A. H. (1991). On mathematics as sense-making: An informal attack on the unfortunate divorce of formal and informal mathematics. In J. F. Voss, D. N. Perkins, & J. W. Segal (Eds.), Informal Reasoning and Education (pp. 311-343). Hillsdale, NJ: Erlbaum.

Singley, M. K., & Anderson, J. R. (1989). The Transfer of Cognitive Skill. Cambridge, MA: Harvard University Press.

Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. Educational Technology, 31(5), 24-33.

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373.

Williams, S. M. (1992). Putting case-based instruction into context: Examples from legal and medical education. The Journal of the Learning Sciences, 2(4), 367-427.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review, 20, 99-149.