Researchers found that authentic work, such as the architectural project completed by students in Eeva Reeder's geometry class, yielded higher test scores for students. Credit: Edutopia
Academic research points to the benefits -- and identifies ongoing challenges -- of implementing performance assessments in K-12 classrooms. Studies also identify the impact technology can have, and is already having, on both classroom and large-scale assessments. Following are synopses of a sampling of studies on K-12 assessment.
Authentic Work Yields Higher Test Scores
A three-year study of teaching and learning in more than 400 third-, sixth-, and eighth-grade classrooms in Chicago found that when students were given writing and mathematics assignments calling for more authentic work, they performed better on tests used to judge basic skills. (Researchers considered authentic work to be assignments that called for students "to formulate problems, to organize their knowledge and experiences in new ways to solve them, to test their ideas with other students, and to express themselves using elaborate statements, both orally and in writing.")
According to the report, "Authentic Intellectual Work and Standardized Tests: Conflict or Coexistence?," published by the Consortium on Chicago School Research, students whose teachers routinely gave "authentic intellectual assignments" increased their scores on the Iowa Test of Basic Skills (a widely used standardized test) by 20 percent more than the average increase in scores nationally.
Researchers identified a "consistent, positive relationship between student exposure to high-quality intellectual assignments and students' learning gains on the test -- even after controlling for race, socioeconomic class, gender, and prior achievement differences among classrooms." More authentic work, say the report authors, benefits both low- and high-achieving students. By challenging students to explore concepts and ideas at a deep level, authentic assignments help students make information their own and enable them to apply this knowledge and understanding when taking conventional tests.
Adapting Tests to Students' Abilities
Authors of a RAND report, "Using Web-Based Testing for Large-Scale Assessments," identify several key advantages of having students take standardized tests via the Web. These advantages include greater flexibility at a lower cost than traditional testing; quicker feedback for students, parents, and teachers regarding student performance (typically, test results are not available until months after students have taken standardized tests); and considerable time savings over traditional methods.
Currently, several large-scale tests are administered via computer, including the Medical Licensing Exam, the Graduate Record Exam (GRE), and the Graduate Management Admission Test (GMAT). To date, however, the use of computer-based standardized tests is limited in K-12 schools.
One of the most significant advantages to automating standardized test taking, say the authors, is the ability to customize tests for every student on the basis of his or her answers to questions. Students who answer initial (relatively easy) questions correctly would quickly progress to more complex questions. These computerized adaptive testing systems (CATS) could shorten test-taking time by eliminating the need for students to answer every question on a test. Questions, suggest the authors, could be stored on a central school district server and then downloaded on demand by schools. Tests could therefore be administered on an as-needed basis rather than at a single, set time each year.
Because test results could be tabulated almost instantaneously, teachers could use results to influence and inform their teaching strategies for individual students -- a marked improvement over the current scenario, in which students have often moved on to the next grade before test results are made available. Researchers are currently investigating the use of CATS as a means of broadening the tests themselves, including "constructed response items that require students to produce, rather than just select their answers."
Despite the promise of CATS, the report authors identify several significant issues requiring further research. These include an analysis of possible errors or unexpected consequences associated with tests administered by computer, and the possibility that students might deliberately answer early (and easy) questions incorrectly and then go back and change their answers later -- after the system had already delivered a relatively easy battery of questions. The authors also raise technical issues concerning the size and availability of questions in the test bank and schools' access to technology.
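The adaptive mechanism the report describes -- easy questions first, with difficulty rising or falling based on each answer -- can be sketched in a few lines of code. This is a deliberately simplified illustration, not the RAND authors' method: the item bank, the starting point, and the one-step ability update are all invented here for clarity (real adaptive tests use statistical models such as item response theory).

```python
def run_adaptive_test(item_bank, answer_fn, num_items=5):
    """Administer a toy adaptive test.

    item_bank: list of item difficulty levels (1 = easiest).
    answer_fn: callable(difficulty) -> bool, True if the student
               answers an item of that difficulty correctly.
    Returns the final ability estimate and the items administered.
    """
    remaining = sorted(item_bank)
    ability = remaining[len(remaining) // 2]  # start near the middle
    administered = []
    for _ in range(min(num_items, len(remaining))):
        # pick the unused item whose difficulty is closest to the estimate
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        correct = answer_fn(item)
        administered.append((item, correct))
        # crude update: step up after a correct answer, down after a miss
        ability += 1 if correct else -1
    return ability, administered

# Simulated student who reliably answers items up to difficulty 6
if __name__ == "__main__":
    estimate, log = run_adaptive_test(list(range(1, 11)), lambda d: d <= 6)
    print(estimate, log)
    # → 7 [(6, True), (7, False), (5, True), (8, False), (4, True)]
```

Note how the loop homes in on the simulated student's ability level after only five items -- the time-saving property the report highlights, since no student needs to answer every question in the bank.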
The Medium Matters
Researchers from Boston College's Center for the Study of Testing, Evaluation, and Educational Policy found that paper-and-pencil writing tests did not accurately measure the abilities of middle school students who were accustomed to doing most of their writing on a computer. Their findings are detailed in the report, "Testing Writing on Computers: An Experiment Comparing Student Performance on Tests Conducted via Computer and via Paper-and-Pencil."
After noticing a disturbing decline in writing skills among middle school (sixth-, seventh-, and eighth-grade) students as measured in a battery of performance assessments, teachers at The Advanced Learning Laboratory School sought the assistance of researchers to determine what effect, if any, the testing format might have on student performance. In the spring of 1995, researchers compared test results of two groups of students. One group took paper-and-pencil tests. A second group took computer-administered versions of select test components. All the students frequently used the computer for writing and presentation projects.
The results are intriguing and raise questions about whether existing testing methods for essay and extended-answer questions are appropriate, given students' increased use of computers for composition. Students who performed the writing assessment on the computer tended to write almost twice as much and were more apt to organize their responses into more paragraphs. Computer-using test takers also performed considerably better on the open-ended components of the test than did their paper-and-pencil counterparts.
"This suggests that we should exercise considerable caution in making inferences about student abilities based on paper-and-pencil, handwritten tests as students gain more familiarity with writing via computers," note the study authors.
Portfolios: For Assessment and Instruction
Researchers from RAND studying the first year of Vermont's implementation of portfolio assessments for fourth and eighth graders found that the development of portfolios (work was selected by students with input from classroom teachers) had several positive educational outcomes: students and teachers were more enthusiastic and had a more positive attitude about learning; teachers devoted "substantially more attention" to problem solving and communication (two areas represented by portfolios); students spent more time working in small groups or in pairs; and teachers felt the portfolios afforded them a new perspective on student work. Their work is summarized in the report "Can Portfolios Assess Student Performance and Influence Instruction? The 1991-92 Vermont Experience."
In addition to noting the many benefits of using portfolios for assessment and instruction, researchers also identify some significant questions and issues to be addressed. Although all teachers participated in workshops on creating and scoring portfolios, surveys of teachers and analysis of the portfolios and accompanying scores pointed to considerable confusion and inconsistencies in the way the portfolios were implemented. Some teachers, for example, allowed students to revise their work before including it in their portfolio; others did not. The type of work included in a student portfolio also varied considerably from one class to the next. Finally, rater reliability was very low.
According to the report, "The percentage of cases in which raters agreed on a score was generally not much higher than expected by chance." Researchers expressed "tempered optimism" about the role portfolios can play in a statewide system of assessment. They note that standardization of student portfolios (that is, greater guidance about the type of work to be included) would improve comparisons across schools and districts but may hamper the use of the portfolios as a teaching and learning tool.
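The comparison against "agreement expected by chance" is the idea behind chance-corrected agreement statistics such as Cohen's kappa, which is commonly used to evaluate rater reliability of the kind the Vermont study measured. A minimal sketch follows; the two raters' portfolio scores are invented purely for illustration, and the RAND report does not specify which statistic it used:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items."""
    n = len(rater_a)
    # observed agreement: fraction of items given the same score
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement: chance that both raters pick the same category,
    # given each rater's own score distribution
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical portfolio scores (1-4 scale) from two independent raters
a = [1, 2, 2, 3, 4, 2, 3, 1]
b = [1, 2, 3, 3, 4, 1, 2, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.49
```

A kappa of 0 means the raters agree no more often than chance -- roughly the situation the report describes -- while 1 means perfect agreement; values near 0.5, as in this made-up example, indicate only moderate reliability.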
Access to Technology Is One -- But Not the Only -- Factor in Achievement Gains
Researchers for the Center for Children and Technology studying achievement gains (as measured by performance on standardized tests) of middle school students in Union City, New Jersey, found that widespread access to and use of technology was one factor in student performance gains. Their findings are documented in the report, "The Union City Story: Education Reform and Technology -- Students' Performance on Standardized Tests."
In the report, researchers analyzed the performance of two groups of students: those with "sustained access to network technology at home and at school" and those with more limited, school-only access. They found that writing is one area in which "deep and sustained" access to technology has made a difference in student academic performance. At the seventh-, eighth-, and ninth-grade levels, students who had access to technology at home and at school did better than their classmates on the writing portion of state tests.
Researchers cite both "contextual" and "technology-facilitated" factors that impact student performance. Contextual factors include: enthusiastic and dedicated staff; high expectations for student performance; and increased parent involvement. Technology-facilitated factors include: increased communication between parents, teachers, and students; increased collaboration between teachers; additional opportunities to write and edit; and additional opportunities to create multimedia authoring projects.