“It Works” and Other Myths of the Science of Reading Era
By Dr. Timothy Shanahan
Recently, I wrote about the science of reading. I explained how I thought the term should be defined and described the kind of research needed to prescribe instruction.
Today I thought I'd put some meat on the bones, adding some details that might help readers grasp the implications of a scientific or research-based approach to reading.
What does it mean when someone says an approach to reading instruction “works”?
The term “it works” has gnawed at me for more than fifty years! I remember as a teacher how certain activities or approaches grabbed me. They just seemed right. Then I’d try them out in my classroom and judge some to work, and others not so much.
But why?
What was it that led me to believe some of them “worked” and some didn’t?
It puzzled me even then.
Teachers, administrators, and researchers seem to have different notions of “what works.”
Teachers, I think, depend heavily on student response. If an activity engages the kids, we see it as hopeful. We give credence to whether an activity elicits groans or a buzz of activity.
When I do a classroom demonstration and students say they liked the activity and want to do more, most likely I’ve won that teacher over.
Teachers recognize that learning requires engagement, so when an activity pulls kids in, they’re convinced that it’s a good idea.
That satisfaction is sometimes denigrated because of its potential vapidity. Let’s face it. Bozo the Clown engages kids, too, but with how much learning?
What those complaints fail to recognize is that the teacher has already bought into the pedagogical value of the activity. They assume it is effective. Student engagement is like gaining a third Michelin star.
What about administrators?
Their needs are different. To them, "it works" is more about adult acceptance. If a program is adopted, the materials shipments arrive as promised, and neither teachers nor parents complain, it works!
And, to researchers?
To them, it means there has been an experimental study that compared that approach with some other and found it to be superior in terms of fostering learning.
If a method does no better than "business as usual" classroom practice, then it doesn't work (a label that, confusingly, isn't entirely accurate, since the difference isn't that everybody in one group learned and nobody in the other did).
I’ve worn all those hats – teacher, administrator, researcher – and I prefer the last one. The reason? Because it’s the only one that explicitly bases the judgment on student learning.
Will we accomplish higher achievement if we follow research and make our teaching consistent with the science?
That’s the basic idea, but even that doesn’t appear to be well understood.
I think we tend to get misled by medical science, particularly pharmacology.
New drugs are studied so thoroughly it’s possible for scientists to say that a particular nostrum will provide benefit 94% of the time and that 28% of patients will probably suffer some unfortunate side effect.
When I tell you that the research shows that a particular kind of instruction works (i.e., it led to more learning), I can’t tell you how likely it is that you will be able to make it work, too.
Our education studies reveal whether someone has managed to make an approach successful.
Our evidence indicates possibility, not certainty.
When we encourage you to teach as it was done in the studies, we are saying, "if they made it work, you may be able to make it work, too."
That’s why I’m such a fan of multiple studies.
The more times other people have made an approach work under varied circumstances, the more likely you’ll be able to find a way to make it work as well.
If you show me one such study, it seems possible I could match their success. Show me 38, and it seems even more likely that I could pull it off.
That nuance highlights an important point: Our instructional methods don’t have automatic effects. We, as teachers, make these methods work.
Lackadaisical implementation of instruction is never likely to have good results. The teacher who thinks passive implementation of a science-based program is what works is in for a sad awakening.
I assure you that in the studies, everyone worked hard to make sure there were learning payoffs for the kids. That’s part of what made it work better than the business-as-usual approach.
That point is too often muffled by our rhetoric around science-based reading. But teacher buy-in, teacher effort, and teacher desire to see a program work for the kids are all ingredients in success.
I don't get it. I'm hearing that some approach (e.g., 3-cueing) is harmful, and yet I know of research-based programs that teach it. Does that make any sense?
You're right that 3-cueing is part of some successful programs. But that doesn't mean it's a good idea. Instructional programs usually include multiple components. Studies of them tell us whether the program has been effective, but they usually say little about the various components that are integral to it.
Without a direct test of the individual components, there are three possibilities: (1) the component may be an active ingredient, one of the reasons for the success; (2) it may be a neutral ingredient, and kids would do just as well without it; or (3) it may be harmful, and the instruction would be even more effective without it.
Logically, 3-cueing makes no sense. It emphasizes behaviors good readers eschew.
That said, I know of no research that has evaluated 3-cueing specifically.
Claims that it’s harmful (beyond being a likely time waster) are, for the time being, overstatements. These claims rely on logic, not data.
The problem that you identify is a common one – people will tell you that multisensory instruction, a sole focus on decodable texts, advanced phonemic awareness, more social studies lessons, word walls, sound walls, and so on are all certain roads to improved achievement. Each is part of at least one successful program or another. But none have been evaluated directly. The truth is, we really don’t know if they have any value at all.
They might provide benefits, but that possibility isn't the same thing as knowing that they have provided them before.
Our district has adopted new programs and instructional routines based on science. But our kids aren’t doing any better than before. Does that make any sense?
No, that makes no sense at all. The purpose of any kind of educational reform – including science-based reform – is to increase learning. The whole point is higher average reading scores or a reduction in the numbers of struggling students.
Whoever's in charge should take this lack of success seriously and should be asking – and finding answers to – the following questions:
- Were these changes really based on the science and what does that mean?
Administrators often make choices based on minimal information. It is better to vet these things before adopting them, but in a case like this one, it is never too late to find out if the reform scheme was really consistent with the science.
- How has the amount of reading instruction students receive changed?
Some approaches work better than others because they have a bigger footprint. They provide a greater amount of teaching than business-as-usual approaches. Adopting such programs without making the schedule changes needed to facilitate their implementation will likely undermine potential success. Are kids getting more instruction, less instruction, or about the same as before?
- How is the amount of reading instruction apportioned among phonemic awareness, phonics, text reading fluency, reading comprehension strategies, written language ability, and writing?
Often the adoption of new programs or reform efforts aimed at a particular piece of the puzzle leads to greater attention to certain abilities, but diminished attention to other key parts of literacy. Make sure that you aren't trading more phonics for less fluency work, or more vocabulary for less comprehension. You want to make sure that all components of reading are receiving adequate attention – not going overboard with some and neglecting others.
- To what extent are teachers using the programs?
Compliance matters in program implementation. The adage that "teachers can do whatever they want when the door is closed" highlights one of the biggest roadblocks to making such efforts work. You need to make sure you have sufficient buy-in from the men and women who do the daily teaching. You bought a new program or set new instructional policies. Are they being used or followed?
- How well prepared are the teachers to provide the required instruction?
Program adoption requires a lot more than issuing a policy proclamation. Research shows that program implementation supported by substantial professional development is much more successful than just buying a program. You need to make sure that you’ve built the capacity for success and not just expected magic to happen.
This article was originally published on shanahanonliteracy.com.
About the Author:
Timothy Shanahan is Distinguished Professor Emeritus at the University of Illinois at Chicago, where he was Founding Director of the UIC Center for Literacy. Previously, he was director of reading for the Chicago Public Schools. He is author/editor of more than 300 publications on literacy education. His research emphasizes the improvement of reading achievement, teaching reading with challenging text, reading-writing relationships, and disciplinary literacy.
Tim is past president of the International Literacy Association. He served as a member of the Advisory Board of the National Institute for Literacy under Presidents George W. Bush and Barack Obama, and he helped lead the National Reading Panel, convened at the request of Congress to evaluate research on the teaching of reading, a major influence on reading education. He chaired two other federal research review panels, the National Literacy Panel for Language Minority Children and Youth and the National Early Literacy Panel, and helped write the Common Core State Standards.
He was inducted into the Reading Hall of Fame in 2007 and is a former first-grade teacher.
About The InvestiGator Club®:
The InvestiGator Club® family of early childhood resources delivers play-based learning programs for children from birth through transitional kindergarten. Standards-based curricula are approved and adopted in states throughout the U.S., including Texas, Florida, Arkansas, Georgia, Virginia, Maryland, Louisiana, Illinois, Minnesota, South Carolina, North Carolina, Rhode Island, Delaware, and more. The delightful InvestiGator Club® characters engage young children in developmentally appropriate experiences that bring joyful learning to classrooms, childcare providers, and families. For more information, email Robert-Leslie Publishing, The Early Childhood Company™, or call 773-935-8358.