Today's children are in school during an exciting nexus of education, technology, and neuroscience research. The surge of neuroscience research illuminating the types of experiences that best support joyful and successful learning has merged with the rising tide of technological advances. The resulting abundance of computer learning games, websites, online programs, apps, and other edtech products is stunning, but it also challenges us to evaluate which products best fit our students' needs.
In the booming business of edtech "brain-booster" products, where claims are made without any formal guidelines, caution is required when assessing the research pronouncements in product literature and on websites. Is a product really "proven by brain research" and "confirmed by evidence-based outcomes"? If so, what exactly does that mean?
An Edutopia community member recently asked:
How can educators get reliable, centralized information in regard to computer programs' positive effects on students' achievement? All publishers tout their programs as the very best, making it difficult to really find what could be a best program. For me, Edutopia is like Consumer Reports.org. I trust what I find there to help me evaluate best programs.
This request is emblematic of the questions I am frequently asked about which claims are genuinely supported by valid neuroscience research. There has never been a greater need for unbiased, expert, Consumer Reports-style evaluations of products and research claims. The goal of this post is to offer tips and resources to help you better understand research validity and related product claims.
The Need for Common Research Standards
The research articles in many medical journals, for example, are systematically "precertified" in Consumer Reports style: to be published, they must meet the medical model's guidelines for research validity. Before publication, established experts in the field also evaluate any conclusions or recommendations attributed to the research data.
Education lacks the same inherent structures that promote similar scrutiny. Until there are professional education standards for what constitutes "proof" of educational effectiveness, we need to increase our own critical consumer skills.
To better make sense of what we should be looking for, let's review three reliable product evaluation resources. You'll see how each incorporates medical model guidelines when evaluating product research and claims: checking for consistency, controlling claims, and distinguishing expertise from quackery. Examining the criteria used by Consumer Reports, Edutopia, and Graphite will build your consumer savvy for analyzing edtech product research claims and the accuracy of product reviews.
Useful Examples of Credible Validity Models
Consumer Reports conducts independent research analysis of products in their laboratories and in typical user environments. Their formalized systems of comparative product analysis are consistent with medical model research guidelines.
The George Lucas Educational Foundation evaluates educational research studies for reproducibility and an evidence base while conducting its own independent research to increase understanding of effective tools and practices. The Foundation disseminates insights from this validated research through Edutopia, a web resource providing information about "what works in education," as well as a forum for ongoing discussions and input from educators. This communication cycles back to the Foundation, igniting further investigations into what educators report experiencing through the lens of their own classrooms.
Like Edutopia, Graphite from Common Sense Education is an independent, nonprofit web source of comparative analysis about apps, computer learning games, and websites. Graphite incorporates components of the medical model in their evaluation of research, using a consistent system for rating specific characteristics relevant to each type of edtech product.
The following two resources, while not included in this discussion of research validity, are excellent models of stringent analysis of research claims regarding educational theory, strategies, and products. Here's how each one works.
British Education Index
This site requires that research or analysis be authored by individuals with professional standing in the specific field of the research. Studies listed in the British Education Index must follow the quality criteria of that professional organization. Excluded are articles or websites whose primary purpose is to advertise events, products, courses, or publications.
What Works Clearinghouse
The U.S. Department of Education's What Works Clearinghouse evaluates education research, theory, products, and claims using medical model criteria to analyze the validity of each study. It applies a consistent rating scale of research validity to determine how much weight each study carries in its independent conclusions.
Teachers Are Experts, Too
Without enforced standards, anyone can claim product research expertise regardless of bias, commercial ties, or appropriate background knowledge. The best way to avoid being deceived by unsupported claims of expertise is to look for documentation of expertise in the science domain appropriate to the product research being evaluated.
While this post has focused on verifying the background, education, or training of anyone claiming expertise about product research, of course it's not that simple. Academic experts are not the only qualified and valuable resources on edtech products. We learn a great deal from teachers' shared classroom experiences with, adaptations of, and disappointments in the edtech tools they use. When you have an experience, insight, or idea, share it through social media and your local professional learning networks.
I hope you will also contribute your suggestions here about evaluating edtech product research claims. We can build our community of critical consumers, as well as help define future professional guidelines, by sharing experiences about misleading research claims for edtech products.