Let’s say a student builds a trebuchet—a siege catapult—for a makerspace or a science classroom, and it fails. Perhaps the support structure is too weak and snaps when the student tries to launch a projectile. The prototype has failed, but the student may still understand the physics of simple machines that undergirds the project.
When we teach students iterative design approaches to project-based learning, or—better yet—ways to apply Mitch Resnick’s creative learning spiral, how do we assess students’ content and skill mastery, particularly when lots of prototyping, iterating, and creativity are embedded in the process?
One approach being explored by researchers is playful assessment, which both measures student understanding and honors the iterative design process used in creating projects.
Embedding Stealth Assessments in Projects
Recently, I spoke with Stephanie Chang, director of programs at the nonprofit Maker Ed, about approaches to assessments that involve creative work. She suggested a game-like approach. After all, games have embedded assessments that gauge player mastery throughout the learning experience. (All games are learning games—you have to learn the game’s system to navigate through it.)
Good games don’t pre- and post-test players, as teachers might; instead, games seamlessly assess as people play them. Val Shute, a professor of education, calls this stealth assessment. “Teachers can design assessments to do just that,” Chang suggested. “We often try to capture a moment in time, and that’s not always useful.”
Whether student work is digital, as with Minecraft or Scratch, or is made with arts-and-crafts materials such as hot glue and cardboard, or involves a combination of both, as with physical computing (e.g., Makey Makey, littleBits, or micro:bit kits), the assessment question should not focus solely on the artifact a student produces for a particular (and often arbitrary) due date.
“Look at the process it took for [the student] to get there as a learner, and how he or she reflected on that experience,” Chang recommended.
In other words, ask students to reflect about how and why they made their design decisions. Processes of reflection create opportunities for students to think about their thinking, a hallmark of good constructionist learning. Aside from written reflections, students can also document their work using digital portfolio tools such as Seesaw or Flipgrid. (For more ideas, check out the free online course Digital Portfolios with Maker Ed at KQED Teach.)
Make Assessments Playful
Like a good game, assessment itself can be playful. That’s not just my opinion: Research on game-based assessment has been around for a few years now.
“Assessment should start with a sense of [student] agency, and should include student participation,” Yoon Jeon (YJ) Kim, a research scientist at the MIT Teaching Systems Lab, explained. “It’s like good game design, where there are interesting problems, and different pathways within it. And there is the sense of competence right away.”
Kim suggested embedding project directions in rubrics and encouraging students to use the rubrics as project instructions. Rubrics should be designed so that students can record their learning accurately—they should be leveled or scaffolded, like games, to accommodate differences in skill or content complexity among students. “We can apply playful learning in the design of assessments to provide an engaging experience, rather than one of anxiety,” Kim continued. “Learners can then see multiple entry points and visible paths that each lead to competencies and skills.”
This might all sound more theoretical than practical, so Kim—along with Louisa Rosenheck, a research scientist from the MIT Education Arcade—created a game-like activity for teachers called MetaRubric. A free printable download, MetaRubric is a playful learning experience about making rubrics. You just need the deck of cards and small groups of three to five people to get started.
The MetaRubric cards guide the action. First, participants are asked to draw a “good” movie poster. (The cards intentionally leave that phrase ambiguous—does it refer to good movies or good posters?—so teachers experience firsthand how it feels to face unclear expectations.)
“These drawings serve as the basis for the subsequent rubric construction, and set players up to have the experience of assessing with a rubric and being assessed by one,” Kim explained—just the experience students have when they first assess themselves and then are assessed by a teacher.
Players next list criteria for what they think a good movie poster should have, identifying the elements and skills they deem most important. “From there they are asked to share and discuss their criteria, in order to come up with one group rubric with a set of criteria they all agree on.” Finally, they make a rubric for rubrics (hence the meta in MetaRubric).
Playing MetaRubric together, rather than designing rubrics in isolation, gives teachers opportunities to share ways to iterate and improve on praxis. “An outcome from playing MetaRubric may not be the perfect rubric,” Kim continued. “But the perfect outcome may be the conversation about how to playfully assess.”
Playful assessments aren’t limited to professional development training or settings like pre-service education courses; experienced teachers can create them together, too. Students should also get involved as they work on open-ended projects. “Students should be part of the broader assessment conversation,” Kim said. “Then they talk about what they think is important when demonstrating success. Assessment needs to be participatory. And playful. It’s about a sense of agency and mutually understood communicative values.”