
Assessing Learning in Maker Education

A look at how maker education is assessed—and how assessment is evolving to measure more than just content.

August 30, 2018

On the surface, maker education looks like a lot of fun: young people tinker with materials, take things apart, make things light up, and design and build things. It’s messy, a little scattered.

Dig a bit deeper, and those acts of play reveal quite a lot of thinking, processing, and meaning-making. Manipulating materials helps us understand how best to use them and how they can be altered or repurposed. Taking things apart or making things light up allows us to examine the made world, to form connections between what we see and what lies just beneath, and to discover or create something new. And designing and building asks us to understand the products, structures, voices, and systems around us, find opportunities to improve or change them, test our assumptions, and more.

Along the way, students learn to use scissors, screwdrivers, software, sewing needles, and saws. They share their ideas, findings, and mistakes. They apply their new knowledge to new creations. They read and comprehend instruction manuals and write out their ideas. They discover areas of expertise and interest.

How Do We Assess the Learning Outcomes?

The joy and engagement inherent in making are undeniable, but we also want to be clear about the overall learning outcomes of maker education, which is where assessment plays a crucial role.

Assessing maker education allows us to point to evidence and practices that make learning more meaningful. It also allows teaching to be more youth-centered and gives young people more agency in their learning. And it asks us to consider a student’s growth and evolution over time.

We know that maker education looks wildly different from one setting to the next, and currently the assessment of maker education reflects that as well. Some educators don’t assess their maker-oriented activities and projects at all. Others attempt to grade the content knowledge or technical skills developed but don’t explicitly grade the context (the project, the tinkering, the collaboration). Some look solely at finished products, while others focus on the process of learning, in addition to or instead of the final product (asking questions such as “Were there observable actions that indicated collaboration, iteration, and so on?”).

Soft skills, such as agency, problem-solving, collaboration, and creativity, are a critical outcome of maker education, and there are models for identifying and thinking about them. For example, the Exploratorium Tinkering Studio has developed the Learning Dimensions of Making and Tinkering Framework, which identifies dimensions such as “Initiative and Intentionality” and “Creativity and Self-Expression.” And the Children’s Museum of Pittsburgh developed the Learning Practices of Making, which include practices such as “Inquire,” “Seek and Share Resources,” and “Hack and Repurpose.”

To be clear, focusing on these soft skills doesn’t mean that content knowledge or conceptual understanding is excluded. It means that all of these skills are equally important, forming a foundation and context for understanding and applying content knowledge. As gatekeepers to colleges and careers open doors and, as a Learning Policy Institute report notes, “are looking for more effective ways to recognize an array of student accomplishments,” we see a rising acknowledgment that deep learning and thinking skills matter more than memorized facts.

A number of ongoing research projects are looking at what aspects of learning in school-based maker environments should be assessed, and how to assess them. In one, Agency by Design, 30 educators from 20 organizations come together once a month to discuss the assessment and documentation of maker-based learning experiences. They focus on what they value in student learning and engagement, and on what counts as evidence of it.

In the project Beyond Rubrics: Moving Towards Embedded Assessment in Maker Education, led by MIT’s Teaching Systems Lab in partnership with my organization, Maker Ed, we like to think of assessment as ongoing, performance-based, multidimensional, flexible, playful, and embedded. We’re envisioning new assessment tools that go beyond rubrics and can be embedded within maker-centered classrooms. These tools, currently in development, will allow students and teachers to seamlessly collect evidence related to competencies and skills such as agency, troubleshooting, and risk-taking. We see maker projects, along with teachers and students, as critical components of embedded assessment.

The tools we’re developing allow students to self-assess and reflect on their work at many points, and support using makerspace processes and products, not quizzes or tests, as the artifacts being assessed. Students can support and assess one another’s efforts, skills, and contributions, and teachers can bake in opportunities for developing agency, risk-taking, creativity, and more. Formative and summative assessments go hand in hand: the sum of many formative assessments, each capturing a moment in time, can and should tell the story of a learner’s abilities and growth.

We’re asking teachers to articulate, for instance, what collaboration looks and sounds like. Teachers are attuned to their students’ behaviors and often recognize moments of learning and development innately and intuitively. They also recognize how students change over time.

Rethinking How We Record Learning

So how do these shifts in how we value learning, what we define as learning, and how we assess it link up with the existing system of test scores, grades, and transcripts?

The two don’t have to be mutually exclusive. Transcripts can look different than they currently do, and there are efforts, such as the Mastery Transcript Consortium, to redesign and rethink them. Grades, test scores, and traditional rubrics can be redesigned to reflect the learning of the whole child, and performance-based assessments such as capstone projects and portfolios can connect to and supplement traditional transcripts.

We’re eagerly revisiting and updating assessment to be as open-ended, participatory, authentic, agentic, and playful as maker education itself.
