The Classroom's Three-Legged Stool
Three essential activities occur in a learning environment: curriculum, instruction, and assessment. Each of these three legs of the classroom stool must be strong on its own for the stool to be well balanced. Curriculum, of course, refers to the educational goals of what is to be taught. These days, it is often paired with standards to communicate school district and state consensus on curriculum goals. Instruction refers to the role of the teacher and of instructional tools and strategies in delivering the curriculum. Unfortunately, the standards movement has sometimes led to a standardization of instruction, a more-than-semantic confusion.
Assessment remains the least understood of the three concepts, primarily because history and policy limit our understanding of its purposes and possibilities. The default understanding of assessment equates it with tests, especially the multiple-choice kind, which all of us have taken. Perhaps ironically, this common experience has not led us to a deeper investigation of the pros and cons of testing.
In a broader sense, assessment encompasses activities related to understanding how well the curriculum and its instructional strategies produce the intended learning outcomes. A primary purpose of assessment is to give ongoing feedback to teachers and learners on how to improve learning. If we truly believe learners learn in different ways and at different paces, curriculum, instruction, and assessment need to be much more flexible than the wooden approach of "Read the chapter, listen to the lecture, and answer the questions at the back of the book. The quiz will be on Friday."
GLEF and many other groups are devoted to changing the nature of curriculum and instruction. We'd like to see curriculum transformed from the twentieth-century textbook-and-lecture model into a project-based curriculum. The implications for instruction are clear: The teacher becomes more a manager of students' projects and learning processes than a direct instructor. Students themselves take on more of the responsibility of instructing others as they work in project-based teams.
Testing ≠ Assessment
Assessment remains something of a mystery, because we have few models of assessment systems that match project-based curricula and student-centered instruction. Our GLEF Agenda advocates a full-spectrum view of assessment, what we call comprehensive assessment. This vacuum around improving assessment limits the development of better curriculum and instruction, because assessment drives instruction. What gets measured gets taught. And, as Einstein indirectly observed, it's become too convenient to confine assessment to counting test scores.
In this No Child Left Behind environment, many districts are allowing and, in some cases, directing teachers to abandon more creative teaching and simply teach to the test. The NCLB experience reveals the urgent need to develop assessment systems that not only give policy makers districtwide and statewide data on which to base policy decisions but also provide teachers, students, and parents the specific "local" data they require to constantly improve classroom performance.
Learning = Driving?
What would a system that combines the ease of administering hundreds of thousands of multiple-choice tests with the higher costs of specific feedback on performance look like? Well, we're all familiar with one such assessment system: the test one takes to obtain a driver's license. As in most states, the California driving exam has both a written component and a road component. The written test comes first, to make sure prospective drivers are familiar with the rules of the road, such as signs, signals, and safe driving practices. However, the state recognizes that just knowing the rules doesn't make for a good driver. Would any of us get in a car with a sixteen-year-old who had passed only the written test?
In driving, we recognize that performance is what counts. The act of driving assumes knowledge of its rules, but the application of those rules in actual performance is where the rubber really meets the road. The same principle applies in other areas of human endeavor, notably sports and the arts. Athletes and actors know the basic rules of their professions, but it's how those rules are applied in performance that matters. Professional football teams and acting companies have a common form of assessment when hiring an athlete or an actor. It's called a tryout or an audition, in which athletes and actors are asked to perform under real-life circumstances, including interaction with their teammates or fellow actors.
Note one important point: The assessment of the activity looks very much like the activity itself. To apply a term used by some assessment experts, assessment becomes authentic. The difference between instruction and the assessment of instruction vanishes.
Direct analogies to learning in schools exist. Just because a student can recite rules of grammar or the names and dates of Civil War battles or the elements in the Periodic Table doesn't mean they can actually write well, understand the historic significance of those battles, or discuss how carbon or chlorine relate to our daily lives. (Generational note: My daughter, a college sophomore, tells me that the Periodic Table we struggled so mightily to memorize in high school is routinely distributed in her chemistry exams as a reference, a sign that her professor understands that memorization does not equal learning.)
We urgently need assessment systems that measure student understanding and performance at these much deeper levels. And we need systems that can assess how well students can work together with others, the social intelligence skill that psychologist Daniel Goleman has written about and that so many companies are asking for.
One of the best articles we've published on the nature of assessment is "Appropriate Assessments for Reinvigorating Science Education," by Bruce Alberts, former president of the National Academy of Sciences and now a professor at the University of California at San Francisco. It cites an interesting example of authentic assessment from Maryland's state elementary science test.
I will devote future columns to describing new approaches to assessment. If your school, district, or state is involved with new forms of assessment, or if you are a researcher developing such systems, I'd like to hear from you. Write to me at firstname.lastname@example.org.