As we approach the end of a semester, and the end of another school year, I’m reminded of how it felt trying to study for my first big test in college – the Music History 101 final exam. The textbook was ginormous, I had a whole notebook full of notes, and I just remember feeling a bit overwhelmed, with no idea how to approach studying for a cumulative test like this in the limited time I had available.
Because much like practicing, I assumed that studying just meant reading and reviewing something in your head over and over until it was burned into your brain somehow. And that takes a ton of time.
So not really knowing what else to do, I read and re-read the book, reviewed and re-reviewed my notes, and tried to familiarize myself with the material as best as I could.
Which, evidently, was not the best approach, because I ended up with a C on the final. Or maybe a C-? But in any case, it wasn’t good.
A couple years later, I took a class on cognitive psychology, and I remember the professor suggesting that we try out several study strategies that I’d never used before. They sounded pretty simple, and even a little obvious. Like spacing study sessions out over time (duh), sleeping instead of cramming in an all-nighter (hmm…), and testing yourself, whether it be with the questions already printed in the chapters, or by creating your own test questions to quiz yourself with (err…?).
I pretty much ignored all of this advice, of course, and just continued to use my read and re-read strategy. Not because I didn’t believe my professor, but partly because it seemed to me that these were more advanced study strategies. Like, sure, those strategies sound great, but you have to do the reading/re-reading study strategy first, right?
Of course, I always seemed to run out of study time, and never got to those “advanced” strategies. So should I have listened to my professor and used those strategies right from the start?
Three approaches to studying
A team of researchers (Ebersbach et al., 2020) recruited 82 students attending a developmental psychology lecture, in which they were learning new and unfamiliar material.
Twenty minutes before the end of the lecture, participants were told that there’d be an “extra learning phase” to help them memorize what they had just learned.
Participants were randomly separated into three groups, and given 10 slides from the lecture, with specific instructions on how to study the material.

One group – the generating questions group – was asked to generate a test question (and answer) for each slide, based on the bolded keywords.
The testing group, on the other hand, was asked to answer questions that had been written in advance by the experimenters. And if they couldn’t come up with the answer on their own, they were allowed to refer to the slides to look it up.
The final group – the restudy group – was asked to go through all of the slides and memorize the content.
A pop quiz!
Participants may have assumed that was the end of their involvement in the study. But one week later, to see how much of the material had actually made it into their long-term memory, everyone was given a 10-question test on the material, with no advance warning.
Five of the questions were the exact questions that the testing group answered in their study session. Questions like “What are indicators of a Theory of Mind at the end of children’s first year of life?”
The other five questions were “transfer” questions. Questions that still referred to the same slides and bolded words, but required the participant to make generalizations or inferences to other contexts. Like, “What is the benefit of understanding pointing gestures for young children? Name one function!”
So what sort of effect did the different study strategies have on test performance?
The results
The restudying group that read and re-read the slides had an average score of 45%.
The group that was asked to answer test questions had an average score of 56%.
And the group that was asked to generate their own questions and answers also scored around 56%.
It was a little surprising to me that the generating questions group didn’t score any higher than the testing group, but to be fair, they might have been at a bit of a disadvantage, given that the testing group had already answered five of the same questions in their study session the week before…
In any case, both of the question-based groups outperformed the restudying group by about 11 percentage points on both types of questions. Which, if you extrapolate that out a bit, could be something like a full letter grade difference.
So…why is generating your own test questions a more effective study strategy than reviewing lecture material?
Why is this a better way to study?
The authors describe a couple of potential reasons.
For one, having to create your own questions and answers forces you to process the material more deeply than you would if you were just reading the content.
Also, there seems to be something significant about having to rephrase or restate a concept in your own words. The researchers note that this creates greater “representational variability” of the material. Which is a fancy way of saying that having to reconceptualize or make sense of a concept in your own words creates more potential pathways for retrieving the content.
Takeaways
So in terms of the application of this to music, yeah, the idea of creating your own test questions might not apply quite so directly to procedural skills like playing an instrument.
But I do think the general idea of testing yourself could apply to the memorization process. Where you test your ability to navigate the various performance cues you may have created (eh? performance cue? what’s that?).
And it certainly makes sense to test yourself in other ways too, whether it’s doing a recorded run-through first thing in the morning, or playing for a friend without much of a warmup to simulate a performance.
But the real application of this is in classroom settings, whether it’s studying for finals in college, or trying to learn a concept more deeply in a continuing ed class.
And if you happen to be a teacher, the researchers made one suggestion that I thought might actually be pretty fun to try. They said that they’ve heard anecdotal reports of at least a few teachers who have asked students to generate their own test questions during lectures. And to get students excited or motivated to actually do this, they offer to include some of these student-generated questions on the actual exam.
References
Ebersbach, M., Feierabend, M., & Nazari, K. B. B. (2020). Comparing the effects of generating questions, testing, and restudying on students’ long-term recall in university learning. Applied Cognitive Psychology, 34(3), 724–736. https://doi.org/10.1002/acp.3639