Yesterday Robin Matthews tweeted a link to an article in The Guardian that says there’s apparently no evidence to back “discovery learning.”
I felt like that sort of went against the whole #mtbos ethos and Dan Meyer’s idea of being less helpful, so I was curious to delve in a bit more. Robin linked me to this scholarly article:
Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching
which has a sort of in-your-face title. Failure? Ok then. The article is certainly interesting. It advocates for direct instruction, specifically the use of worked examples, especially with lower-performing students, and suggests that inquiry-based learning helps only top performers and is even harmful for lower performers, who end up knowing much less.
One thing that stuck out to me was this phrase “unguided” or “minimally guided” instruction. Is that what we advocate or mean when we talk about investigations and inquiry-based learning?
The way I see it, our job as teachers designing the tasks is to have the end goal in mind (what we want the students to discover) and also to anticipate the things the students will struggle with. So we're always scaffolding the discovery learning with pointed questions and sometimes hints along the way to point the students in the right direction. Feedback during the task is also critically important: the article is clear that when students are going the wrong way we should step in; otherwise they may codify mistakes that are later hard to shake.
I find myself constantly revising my worksheets and tasks to bring more clarity and help the students focus where I want them to. Is this minimally guided or is this guided instruction? I don’t think it’s direct instruction, per se. So am I doing something research-supported or not? Hard to say.
Tuovinen and Sweller (1999) showed that exploration practice (a discovery technique) caused a much larger cognitive load and led to poorer learning than worked-examples practice. The more knowledgeable learners did not experience a negative effect and benefited equally from both types of treatments.
I agree that worked examples are really important! I think after investigating something, there should always be structured note-taking (which can clarify any misconceptions and make sure that all students have arrived at the same framework), worked examples, and then practice.
But I think the point of investigations is not necessarily the content at hand, but rather the flexible approach of taking knowledge one already has and building on it in new ways. That is what we are committing to long-term memory: not *what* is investigated but the process of investigation itself. How often have we seen students stymied by a slight change in a problem's wording, presentation, or application?
We constantly lament that they lack the tools for this — isn’t this what discovery learning attempts to remedy? Of course I, as the teacher, can explain the concept best — distilling its pitfalls and connections and intricacies into an outline format with examples — and that needs to happen as well. I can’t abdicate my role as the content expert in the room (both mathematically and the learning of math). But in my mind, the discovery process isn’t (just) about the content/concept, but about the journey that the student takes applying known knowledge to the just-out-of-reach knowledge.
This little bit was the most fascinating to me:
…the worked-example effect first disappears and then reverses as the learners’ expertise increases. Problem solving only becomes relatively effective when learners are sufficiently experienced so that studying a worked example is, for them, a redundant activity that increases working memory load compared to generating a known solution (Kalyuga, Chandler, Tuovinen, & Sweller, 2001). This phenomenon is an example of the expertise reversal effect (Kalyuga, Ayres, Chandler, & Sweller, 2003). It emphasizes the importance of providing novices in an area with extensive guidance because they do not have sufficient knowledge in long-term memory to prevent unproductive problem-solving search. That guidance can be relaxed only with increased expertise as knowledge in long-term memory can take over from external guidance.
Before I worked in The Netherlands, I worked in the South Bronx in District 7, a really struggling area. When I taught there, I had a really regimented approach. Every day began with a warm-up (I couldn't bear to call it a "Do Now," as they were trying to get us to; it sounded too imperative) that would either review the previous day's work or prep the students for the day's work with some small piece of discovery learning. Just a short example:
1) 3(x-4) —> ________
2) ______ —> 7x + 35
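For anyone filling in the blanks: the first asks students to distribute, and the second to factor out the common factor of 7, so the completed pair would be

```latex
3(x-4) \;\rightarrow\; 3x - 12
\qquad\text{and}\qquad
7(x+5) \;\rightarrow\; 7x + 35
```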
Then we did notes in a really structured outline format (roman numerals and all), a worked example, and then the students got to work. I prided myself on my super clear explanations and step-by-step instructions. And you know what, I had great results (as measured by the NYS tests and Regents tests, of course, so grain of salt)! I felt really good about how I taught math.
But when I came to The Netherlands, a lot of students chafed at this style (though they all said I explained well and had good results). They didn’t want to take notes. They felt like they already got it and I was forcing them to sit through an explanation that they didn’t need and making them write down things they were never going to look at. I also started to feel like maybe I was holding their hands too much, like I was doing all the mental heavy lifting. So I’ve drifted from this model.
So here are my questions:
Does the kind of instruction you provide depend on the level of the class you have (and I don’t just mean differentiating a bit but the whole approach)? It feels wrong somehow to deny a weaker group the experiential learning…but at the same time, providing them with direct instruction is what helps them grow, according to this research. And does all this point to the idea that tracking is better? Because you are able to provide direct instruction to lower performers, who will benefit most from it, and provide discovery learning to higher performers, who will benefit most from that?
I don’t know the answers to these questions, and I wish I had two parallel classes so I could try out the two different styles (but no, this year I have one class each of 7th, 8th, 9th, 10th, 11th, and 12th, UGH) and see how it plays out. What are your experiences? Do you agree with the article? How do you reconcile the kind of teaching advocated by Elizabeth Green’s Building a Better Teacher and this new study?