“Productive Failure”: A Teaching Method That Leads to Short-Term Failure, but Long-Term Success

My parents would sometimes take me on drives in the countryside, with the intention of getting lost and finding their way back home. It always worked out fine with my dad at the wheel, but it seems that I did not inherit his good sense of direction, and, given a choice, I will tend to take the exit or fork in the road that takes me farther away from home.

So I was that early-adopter nerd who was obsessively printing out MapQuest maps in both directions anytime I had a driving trip planned. And I’m the one who to this day pops out of the subway in NYC looking slightly frazzled until I can figure out which direction is which.

Of course, the upside of getting lost a lot is that you discover new things and places that you otherwise would not have known existed. You learn more.

A similar phenomenon occurs in teaching. Or more specifically, in learning.

Traditionally, teaching looks something like this:

  1. Explain how to do something (lecture).
  2. Show them what it looks like (demonstration).
  3. Fix their off-target attempts, to help them get it right as quickly as possible, and reward them for their successes (feedback).

This sequence tends to emphasize getting to the correct answer as expeditiously as possible. It’s how our schools are often set up. It’s how many of us were taught. And it’s how we parent as well.

The tell-show-do model makes a lot of sense – and it works pretty efficiently. Yet there’s little room or time for exploration, floundering around in the dark, and discovery. And growing evidence suggests that the experience of being lost may actually facilitate a deeper grasp of the material in the long run – even though at first, it looks like a hot mess.

“Productive failure”

A pair of researchers (Kapur & Bielaczyc, 2011) conducted a study of “productive failure” to see if this method would lead to greater learning than the traditional teaching approach (“direct instruction”).

They took two 7th grade classrooms, and gave them a 30-minute, 9-question pretest to see how much they already knew about how to calculate average speed.[1]

But then their learning experience began to diverge.

Direct instruction

One class began learning about average speed with a lecture. The teacher explained the concepts, worked through some examples, encouraged questions, then had students solve practice problems. Then they went over the problems and discussed the solutions. For homework, they were assigned similar problems in their workbook.

The problems ranged from simple to moderate in difficulty, but were essentially plug-and-chug-type questions. Here’s an example:

Jack walks at an average speed of 4 km/hr for one hour. He then cycles 6 km at 12 km/hr. Find his average speed for the whole journey.
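For reference, the plug-and-chug solution hinges on the fact that average speed is total distance divided by total time, not the mean of the two speeds. Here’s a quick sketch in Python (my own illustration, not from the study):

```python
# Jack's problem: walk 4 km/hr for 1 hour, then cycle 6 km at 12 km/hr.

walk_distance = 4.0          # km covered walking (4 km/hr for 1 hour)
walk_time = 1.0              # hours
cycle_distance = 6.0         # km covered cycling
cycle_time = 6.0 / 12.0      # hours = distance / speed = 0.5

# Average speed is total distance over total time --
# NOT the average of the two speeds (which would wrongly give 8 km/hr).
average_speed = (walk_distance + cycle_distance) / (walk_time + cycle_time)
print(round(average_speed, 2))  # → 6.67 (km/hr)
```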

They repeated this lecture-practice/homework-feedback process for 7 class periods.

Pretty typical-sounding process, right?

Productive failure

The other class was split up into small groups, and each was tasked with solving two complex problems like below:

Hummingbirds are small birds that are known for their ability to hover in mid-air by rapidly flapping their wings. Each year they migrate approximately 9000 km from Canada to Chile and then back again. The Giant Hummingbird is the largest member of the hummingbird family, weighing 18-20 gm. It measures 23 cm long and it flaps its wings between 70-80 times per minute. For every 18 hours of flying it requires 6 hours of rest. The Broad Tailed Hummingbird flaps its wings 100-125 times per minute. It is approximately 10-11 cm long and weighs approximately 3-4 gm. For every 12 hours of flying it requires 12 hours of rest. If both birds can travel 1 km for every 550 wing flaps and they leave Canada at approximately the same time, which hummingbird will get to Chile first?
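For reference, here’s one back-of-envelope way to attack the problem, sketched in Python. This is my own illustration, not a solution from the study; the midpoint flap rates (75 and 112.5 flaps/min) and the rest-in-proportion-to-flying-time model are assumptions, and part of what makes the problem rich is that students must choose assumptions like these themselves.

```python
# Back-of-envelope model of the hummingbird migration problem.
# Assumptions (mine): each bird flies at the midpoint of its flap-rate
# range, and rest accrues in proportion to flying time throughout the trip.

TOTAL_KM = 9000
FLAPS_PER_KM = 550
TOTAL_FLAPS = TOTAL_KM * FLAPS_PER_KM  # 4,950,000 flaps for the whole trip

def trip_hours(flaps_per_min, fly_hours, rest_hours):
    """Total trip time in hours: flying time plus proportional rest."""
    flying_hours = TOTAL_FLAPS / flaps_per_min / 60
    return flying_hours * (1 + rest_hours / fly_hours)

giant = trip_hours(flaps_per_min=75, fly_hours=18, rest_hours=6)
broad_tailed = trip_hours(flaps_per_min=112.5, fly_hours=12, rest_hours=12)

print(round(giant, 1), round(broad_tailed, 1))  # both ≈ 1466.7 hours
```

Interestingly, under these particular midpoint assumptions the two birds come out essentially tied, which hints at why the problem invites so many different solution paths and justifications.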

They were unleashed on these problems with no teacher support or guidance, but simply given two class periods to try to solve each problem (4 classes total). They were given no homework, but did have extra problems to work on individually after completing each of the group problems (2 class periods).

After 6 sessions of working on their own, the class spent their final class session sharing their work with the teacher and each other – their solutions, strategies, and approaches to solving the complex problems. Only then did the teacher finally explain how to approach and solve the problems the “correct” way, and assist the students in going back through the problems and arriving at the correct answers.

So all in all, they spent 7 class sessions learning how to calculate average speed, exactly like the direct instruction group.

Then, the posttest

Following the completion of the 7 classes, both classes took a 35-minute, 5-item posttest, which consisted of 3 simple problems (like the ones the direct instruction group worked on), 1 complex problem (like the ones the productive failure group had to do), and 1 problem of a type neither group had seen (basically, answer the question and pick which graphic best represents the answer).

So…how’d they do?

If we’re talking about success in conventional terms – as in, can students learn how to solve relatively straightforward problems with a teacher’s guidance and feedback – then the results were pretty clear.

The direct instruction group averaged a score of 91.4% on their homework.

The productive failure group, on the other hand, performed miserably on their unguided attempts to solve the complex problems, with only 2 out of the 12 groups (16%) arriving at the correct solutions. And when working on the individual problems, their average score was even worse (11.5%[2]).

But wait! A very different picture emerges when you look at the posttest scores.

Unlike the homework problems, which were pretty straightforward, the posttest included both simple and complex problems. And in both cases, the productive failure group outscored the direct instruction group by a significant margin.

On the simple problems, the productive failure group earned an average score of 84.8% (vs. 75.3% for the direct instruction group).

On the complex problem, the productive failure group earned an average score of 59.7% (vs. 42.4% for the direct instruction group).

Short-term performance vs. long-term learning

Students often ask for help before trying to solve problems on their own. And teachers are in turn accustomed to providing help (it’s certainly faster and more efficient in the short term to offer the right fix, technique, etc.), rather than withholding the right answer or strategy and letting the student struggle, search, and look in all the “wrong” places.

So in much the same way that spaced, random, and variable practice lead to worse performance in the short term but better performance in the long term, the goal of productive failure is not to get to the correct answer via shallower learning (“unproductive success”). Instead, it’s to cultivate a deeper understanding of the fundamental principles and the various ways of arriving at a solution, regardless of short-term performance.


Furthermore, it seems that the productive failure approach also increases engagement in the learning process, at least as suggested by the following quotes from teachers involved in the study:

“I was not only surprised by the kinds of ideas and methods students developed to solve the problems but also their ownership of their ideas…I mean, during the consolidation, I could see that they really wanted to know why their methods did not work, or how someone else’s method was better, and how the “correct” way of solving the problem was better…”

“in our usual lessons, they simply accept what we tell them, our explanations and stuff, this is how to do it and they just take it…but here, they were not ready to just take our explanations so easily, they wanted to defend their ideas and not give up without a fight sort of…I mean, not a fight but you know there was this engagement in understanding why, why, why…”

Take action

How have you found ways of applying this concept to your teaching approach (or how could you, if not already)?

One example that comes to mind is fingerings and bowings. I remember one of my early formative teachers withholding fingering recommendations when I was still quite young, encouraging me to come up with some on my own.

I felt totally lost at the time; the idea of having to pull fingerings seemingly out of thin air was completely foreign. I thought I was supposed to do whatever was printed in the music, or whatever she gave me. I came up with some pretty funky ones at first, but over the years, I came to take great pride in thinking up clever fingerings and bowings designed to enhance the music or make things easier to execute.

Additional reading

For more on the peril of being too helpful in lessons, here’s a great piece written by Robert Duke: Their Own Best Teachers: How We Help and Hinder the Development of Learners’ Independence @Music Educators Journal


  1. The classes’ scores were not significantly different from each other, and they were both taught by the same math teacher, so everyone started off on pretty even ground.
  2. Which I believe is an indicator of the percentage of students who were able to solve the extension questions correctly on their own, but I couldn’t tell for sure from the description in the study.
