My parents would sometimes take me on drives in the countryside, with the intention of getting lost and finding their way back home. It always worked out fine with my dad at the wheel, but it seems I did not inherit his good sense of direction; given a choice, I will tend to take the exit or fork in the road that takes me further away from home.
So I was that early-adopter nerd who obsessively printed out MapQuest maps in both directions anytime I had a driving trip planned. And I'm the one who, to this day, pops out of the subway in NYC looking slightly frazzled until I can figure out which direction is which.
Of course, the upside of getting lost a lot is that you discover new things and places that you otherwise would not have known existed. You learn more.
A similar phenomenon occurs in teaching. Or more specifically, in learning.
Traditionally, teaching looks something like this:
- Explain how to do something (lecture).
- Show them what it looks like (demonstration).
- Fix their off-target attempts, to help them get it right as quickly as possible, and reward them for their successes (feedback).
This sequence tends to emphasize getting to the correct answer as expeditiously as possible. It’s how our schools are often set up. It’s how many of us were taught. And it’s how we parent as well.
The tell-show-do model makes a lot of sense – and it works pretty efficiently. Yet there’s little room or time for exploration, floundering around in the dark, and discovery. And growing evidence suggests that the experience of being lost may actually facilitate a deeper grasp of the material in the long run – even though at first, it looks like a hot mess.
A pair of researchers conducted a study of “productive failure” to see if this method would lead to greater learning than the traditional teaching approach (“direct instruction”).
They took two 7th grade classrooms, and gave them a 30-minute, 9-question pretest to see how much they already knew about how to calculate average speed.[1]
But then their learning experience began to diverge.
One class began learning about average speed with a lecture. The teacher explained the concepts, worked through some examples, encouraged questions, then had students solve practice problems. Then they went over the problems and discussed the solutions. For homework, they were assigned similar problems in their workbook.
The problems ranged from simple to moderate in difficulty, but were essentially plug-and-chug-type questions. Here’s an example:
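These plug-and-chug questions all reduce to the same formula: average speed = total distance ÷ total time. A minimal sketch, with made-up numbers rather than the study's actual problem:

```python
# Average speed is simply total distance divided by total time.
# The numbers below are hypothetical, for illustration only.

def average_speed(distance_km: float, time_h: float) -> float:
    """Return average speed in km/h."""
    return distance_km / time_h

# A car covers 150 km in 2.5 hours:
print(average_speed(150, 2.5))  # prints 60.0
```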
They repeated this lecture-practice/homework-feedback process for 7 class periods.
Pretty typical-sounding process, right?
The other class was split up into small groups, and each was tasked with solving two complex problems like below:
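What typically makes problems like these "complex" is that the one-step formula can't be applied directly. A classic example of the genre (hypothetical numbers, not the study's actual problem) is a two-leg trip at different speeds: the tempting move is to average the two speeds, but the correct answer comes from total distance over total time.

```python
# Two-leg trip at different speeds (hypothetical numbers).
leg1_km, leg1_kmh = 60, 30  # 60 km at 30 km/h -> takes 2.0 h
leg2_km, leg2_kmh = 60, 60  # 60 km at 60 km/h -> takes 1.0 h

total_distance = leg1_km + leg2_km                    # 120 km
total_time = leg1_km / leg1_kmh + leg2_km / leg2_kmh  # 3.0 h

naive = (leg1_kmh + leg2_kmh) / 2      # 45.0 km/h -- the tempting (wrong) answer
correct = total_distance / total_time  # 40.0 km/h -- distance over time

print(naive, correct)  # prints 45.0 40.0
```

Discovering *why* the naive average fails is exactly the kind of underlying principle the productive failure groups had to wrestle with on their own.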
They were unleashed on these problems with no teacher support or guidance, but simply given two class periods to try to solve each problem (4 classes total). They were given no homework, but did have extra problems to work on individually after completing each of the group problems (2 class periods).
After 6 sessions of working on their own, the class spent their final class session sharing their work with the teacher and each other – their solutions, strategies, and approaches to solving the complex problems. It was only then that the teacher finally explained how to approach and solve the problems the "correct" way, and assisted the students in going back through the problems and arriving at the correct answers.
So all in all, they spent 7 class sessions learning how to calculate average speed, exactly like the direct instruction group.
Then, the posttest
Following the completion of the 7 classes, both classes took a 35-minute, 5-item posttest, which consisted of 3 simple problems (like the ones the direct instruction group worked on), 1 complex problem (like the ones the productive failure group had to do), and 1 type of problem that neither group had seen before (basically, answer the question and pick which graphic best represents the answer).
So…how’d they do?
If we’re talking about success in conventional terms – as in, can students learn how to solve relatively straightforward problems with a teacher’s guidance and feedback – then the results were pretty clear.
The direct instruction group averaged 91.4% on their homework.
The productive failure group, on the other hand, performed miserably on their unguided attempts to solve the complex problems, with only 2 out of the 12 groups (16%) arriving at the correct solutions. And when working on the individual problems, their average score was even worse (11.5%[2]).
But wait! A very different picture emerges when you look at the posttest scores.
Unlike the homework problems, which were pretty straightforward, the posttest included both simple and complex problems. And in both cases, the productive failure group outscored the direct instruction group by a significant margin.
On the simple problems, the productive failure group earned an average score of 84.8% (vs. 75.3% for the direct instruction group).
On the complex problem, the productive failure group earned an average score of 59.7% (vs. 42.4% for the direct instruction group).
Short term performance vs. long term learning
Students often ask for help before trying to solve problems on their own. And teachers are in turn accustomed to providing help (it's certainly faster and more efficient in the short term to offer the right fix, technique, etc.) rather than withholding the right answer or strategy and letting the student struggle, search, and look in all the "wrong" places.
So in much the way that spaced, random, and variable practice lead to worse performance in the short term, but better performance in the long term, it seems that the goal of productive failure is not to get the correct answer via shallower learning (“unproductive success”), but instead, to cultivate a deeper understanding of the fundamental principles and various ways of arriving at a solution regardless of short-term performance.
Furthermore, it seems that the productive failure approach also increases engagement in the learning process, at least judging by the following quotes from teachers involved in the study:
“I was not only surprised by the kinds of ideas and methods students developed to solve the problems but also their ownership of their ideas…I mean, during the consolidation, I could see that they really wanted to know why their methods did not work, or how someone else’s method was better, and how the “correct” way of solving the problem was better…”
“in our usual lessons, they simply accept what we tell them, our explanations and stuff, this is how to do it and they just take it…but here, they were not ready to just take our explanations so easily, they wanted to defend their ideas and not give up without a fight sort of…I mean, not a fight but you know there was this engagement in understanding why, why, why…”
How have you found ways of applying this concept to your teaching approach (or how could you, if not already)?
One example that comes to mind is fingerings and bowings. I remember one of my early formative teachers withholding fingering recommendations when I was still quite young, encouraging me to come up with some on my own.
I felt totally lost at the time, and the idea of having to pull fingerings seemingly out of thin air was completely foreign. I thought I was supposed to do whatever was printed in the music, or whatever she gave me. I felt lost for some time, and I came up with some pretty funky ones, but over the years, I came to take great pride in thinking up clever fingerings and bowings designed to enhance the music or make things easier to execute.
For more on the perils of being too helpful in lessons, here's a great piece written by Robert Duke: Their Own Best Teachers: How We Help and Hinder the Development of Learners' Independence, in Music Educators Journal.
1. The classes' pretest scores were not significantly different from each other, and both classes were taught by the same math teacher, so everyone started off on pretty even ground.
2. Which I believe is an indicator of the percentage of students who were able to solve the extension questions correctly on their own, but I couldn't tell for sure from the description in the study.