Every summer, we hear about the dangers of “summer slide”—kids going to the beach, watching television, or playing video games, all the while forgetting most of what they learned during the school year. The real story, as it turns out, is a lot more complicated, according to a researcher writing in a recent Education Next article.
The idea of summer learning loss—of a growing learning gap between students who took summers off and those who continued to study—was most famously recorded in test scores from Baltimore in the 1980s, and it appears to be supported by common sense: if kids spend their summers playing, they’ll fall behind those who spend their time studying. But according to Paul T. von Hippel, a policy professor at the University of Texas at Austin, the research on summer learning loss has flaws that should make us question the universal truth of summer slide.
“I used to be a big believer in summer learning loss,” von Hippel explains in the article. “But my belief has been shaken. I’m no longer sure that the average child loses months of skills each year, and I doubt that summer learning loss contributes much to the achievement gap in ninth grade.”
What led him to this conclusion? He and his colleagues tried—and failed—to replicate the original 1980s study that popularized the notion of summer learning loss. In a study published earlier this year, von Hippel found that the testing methods used three decades ago tended to distort student scores. Although students were ranked in the right order, the gaps between those students could shrink or expand—what he calls a “fun-house mirror” effect.
Imagine that two students take a test. Under older scoring systems, which simply report the share of questions answered correctly, one student might get 30 percent right while another gets 40 percent, a gap of 10 percentage points. But what if the test asked easier questions? Both students would do better, with the first student scoring 40 percent and the second scoring 60 percent, doubling the gap to 20 points.
“Depending on what questions you add, you can get any gap that you want,” von Hippel clarified.
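The distortion von Hippel describes can be sketched with a toy model. The snippet below is an illustration only, not his actual analysis: it assumes a simple Rasch-style item response model (probability of a correct answer depends on the gap between a student’s ability and a question’s difficulty) with two hypothetical students whose abilities are fixed. Their ability gap never changes, yet their percent-correct gap grows or shrinks depending on how easy the questions are.

```python
import math

def p_correct(ability, difficulty):
    """Rasch-style chance that a student answers one item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def expected_percent(ability, difficulties):
    """Expected percent-correct score on a test built from these items."""
    return sum(p_correct(ability, d) for d in difficulties) / len(difficulties)

# Two hypothetical students; their underlying ability gap is fixed at 1.0.
weaker, stronger = 0.0, 1.0

hard_test = [2.0] * 10   # ten hard questions
easy_test = [0.5] * 10   # same students, ten easier questions

hard_gap = expected_percent(stronger, hard_test) - expected_percent(weaker, hard_test)
easy_gap = expected_percent(stronger, easy_test) - expected_percent(weaker, easy_test)

# Rank order is preserved on both tests, but the percent-correct gap
# is wider on the easier test even though neither student changed.
```

Modern difficulty-weighted scoring reports an estimate of the fixed ability scale rather than the raw percent correct, which is why the fun-house mirror effect disappears when tests are scored that way.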
In the 1980s, tests were prone to such distortions, so what started as a small achievement gap in first grade blew up by eighth grade, reinforcing the idea that students—especially those from low-income families—fall further and further behind every summer.
But in the mid-’80s, when we switched to modern scoring methods that weighted the difficulty of questions, those distortions vanished and the achievement gap actually shrank as students got older. By then, however, the idea of a summer slide was already entrenched in the public’s mind. Had the original study been conducted a few years later, we’d probably still think of summertime as much-needed downtime and playtime.
None of this means that summer learning is pointless. If a child struggles during the school year, they can use the long break to catch up, von Hippel points out. But the kids who study aren’t getting much further ahead, and the kids who play aren’t falling much further behind. Summers don’t increase the achievement gap, according to von Hippel, who cites research that suggests that the gap is already present at kindergarten and stays about the same as kids get older.
The big takeaway here is that the popular notion of summer slide is based largely on old data, and is probably vastly overstated. While summer learning can help individual students catch up, a closer look at the data suggests that taking a break does not widen achievement gaps across student populations more generally. Instead, we might save summer for playing and focus our efforts on improving education, including inequities in the system, during the traditional school year.