Silent Statistics: Student-Performance Data Misses the Most Important Outcomes
Recently, I was interviewed by consultants reviewing the data systems the California Department of Education uses to track student performance. I have had to wrestle with how I feel about the whole process, because unfortunately, I think the emphasis on data has not been the boon to students and educators that was promised. But as someone with a background in science, I have an inherent love of data. More information is always better than less. So how can data do harm?
What happens if we apply a medical analogy to our schools? We have implemented an evidence-based model whereby we use standardized tests -- in reading and math -- to measure the vital signs of our patients. When the signs are weak -- the scores are low -- we prescribe remedies such as scripted curricula, retaught lessons, and extra time on core reading and math skills. These remedies usually improve the vital signs, which we take as an indication that the treatment was successful.
Science Friction
However, disturbing new data seem to indicate some unintended side effects of our remedies. Part of my job is to work with elementary school teachers, through a grant, on improving their science instruction. The biggest problem we have had this year is that they simply do not have time to teach science.
The teachers in low-scoring Program Improvement schools have the hardest time: They must prepare weekly schedules listing how much time they intend to spend on each subject, and they are expected to average two and a half hours of reading and writing instruction each day, plus an hour and a half of math.
Factor in early dismissal on Wednesdays, and this schedule leaves them about thirty minutes a week for science. But many of these teachers report that even that half hour is often lost to reteaching a writing skill, so weeks can pass with no science at all. Thus, the first detrimental side effect is the loss of time for subjects the tests do not emphasize: science, history, art, and physical education. Our data systems barely measure performance in these subjects, and by ignoring them, we are driving their systematic decline in our schools.
But even when we do teach science, the remedies for improving test scores can undermine it. At a meeting this week, teachers told me that their students were enjoying their science activities, but when they asked the students to read the science textbook, the kids rebelled. They had come to associate textbook reading with the scripted reading curriculum, which they hate.
For these students, reading has become an onerous task rather than a joyful one. As the son of a bookseller and an avid reader myself, I find that deeply troubling. If we teach reading but in the process rob it of any joy, haven't we done more harm than good? And because the love of reading is not measured by any test, how do we even know what damage the scripted curriculum has done?
The Dropout Disconnect
This issue got me thinking about other ways our current data systems may be missing crucial pieces of the puzzle.
The biggest missing piece is the dropout rate, which seems to be climbing, especially in the urban schools where these remedies have been applied most heavily. The trouble is that when the least successful students leave, average test scores actually rise. A student who drops out is akin to a hospital patient dying rather than being healed. But while hospitals trust doctors to make diagnoses that go beyond the vital signs, so they know why patients die, our dropouts leave quietly, and we are left with scant evidence to explain their departure.
The research on why these students are dropping out is limited. I have some ideas, however, based on my eighteen years of teaching in an Oakland middle school. I think students drop out when they are unable to feel successful in school and when they cannot connect success in school to a vision of their own future. They drop out when they are bored because they have become disengaged from the pursuit of knowledge.
How does that disconnection occur -- and how can we repair it? Has the growing use of scripted curricula and the emphasis on test scores actually deepened this alienation? This, it seems to me, is the most urgent question we face. Like the love of reading, engagement is not something any test measures directly. Because there are no systems to measure student engagement, and because we don't ask or trust teachers to diagnose why kids drop out, all we have is indirect evidence of a rising dropout rate.
In medicine, systematic efforts are in place to consider all the possible side effects of remedies. In our schools, however, we seem to have fixated on a measurement system and implemented remedies with little regard for possible side effects. We are watching those test scores like hawks, and dosing our students with the "proven" remedies. But the teachers with whom I work are reporting a lot of side effects not reflected by the test scores.
The ultimate side effect, the student who drops out, is completely removed from our data set and thus not even measured. Do half of our patients have to actually die before we realize the medicine is not working?
Please share your thoughts.