
6 Key Questions to Build Better Rubrics

A framework for creating assessment tools that clearly communicate assignment expectations and encourage high-quality work.

October 10, 2022

For decades, I’ve supported teachers in developing problem-based performance tasks and project-based learning assessments. One thing I’ve learned doing this work is that while the overall quality and rigor of most performance assessments have improved, many rubrics still use negative, subjective, and kid-unfriendly language, especially at lower performance levels.

High-quality rubrics clearly describe agreed-upon learning expectations so that teachers can make learning visible to students, frame actionable feedback (feedback that uncovers student thinking and that students can immediately act on) to advance their learning, and accurately evaluate student progress over time. Likewise, students can use well-constructed rubrics to clarify expectations for learning, monitor and reflect on their progress, and give peers feedback on the quality of their work.

Focus on 6 Key Questions to Build a Better Rubric

To understand how to improve rubric quality, start by breaking down the performance task to be assessed and identifying the learning evidence demonstrated in a product or performance. For example, let’s analyze a sample task in which students are asked to research a “giant of science” and present their learning orally and in written form (they could choose to write an article, museum brochure, or infographic).

Let’s say the instructions require students to do these things:

  • Use multiple sources to describe the discovery and analyze its significance (past to present).
  • Include biographical information about the scientist(s).
  • Include a prop or visual symbolizing an aspect of the discovery.
  • Finally, share your reflections on what it means to be a “giant” in any field.

Now, let’s work through six research-based questions to examine rubric quality, evaluating how well the criteria align with the task and how the rubric descriptors are worded.

Question 1

Do the performance task and rubric criteria pass the chocolate chip cookie test—assessing both basic and deeper learning?

It’s important that performance assessments aren’t limited to applying basic skills in routine tasks. They should require students to integrate multiple skills and concepts in complex tasks. Because of that, it’s necessary for the rubric to cover a range of criteria aligned with task expectations.

Most rubrics include these criterion types:

  • Form/format of the final product (oral and written components, cite sources)
  • Content accuracy (describe the discovery, include biographical information, use terms appropriately, check sources)
  • Processes/procedures (research, organize information, edit)

But what about these?

  • Impact (analyze significance; past to present)
  • Knowledge construction (reflect on the essential question)

Only the latter two criteria assess transfer of learning and deeper understanding, so consider including at least one of them. Learning can happen even when a final product fails to meet the higher levels. Remember, assessing student self-reflections can uncover learning that your performance task might not have addressed!

Next, identify all criterion types that align with the intended learning. For a research paper, for example, you’d start filling in the rubric template by describing proficient performance: What did students do to show completion of the task? Here are some examples:

  • The research includes multiple credible sources.
  • Biographical information is provided along with relevant visuals or props.
  • Evidence explains why the discovery was significant at the time and whether it still has importance today.
  • The conclusion goes beyond summarizing ideas and includes personal reflections about what it means to be a giant in science or any field.

Question 2

Does the number of performance levels support accurate evaluation of progress?

An odd number of performance levels (e.g., low-middle-high) usually results in most students scoring in the middle, especially when the lowest level describes “doing little or nothing” and the highest level describes perfection. An even number of performance levels (e.g., getting started–developing–proficient–advanced) drives more qualitative decision-making, describing what evidence of learning looks like along a continuum.

Question 3

Is the rubric language descriptive, observable, and measurable?

Avoid negative or judgmental language (e.g., behavioral indicators rather than learning evidence), subjective language (e.g., poor, neat, ample, sophisticated), and vague frequency indicators (e.g., rarely, often, sometimes); focus instead on evidence of learning.

Adjacent performance levels should describe qualitatively different and achievable performance. Consider how changes to rubric wording facilitate interpretation and agreement when grading student work and provide guidance to students.

Wording before

Level 1: May be missing central claim, supporting subclaims, or relevant counterclaims.

Level 2: Partially develops central claim, supporting subclaims, and relevant counterclaims.

Wording after

Level 1: Identifies one or both of the following: possible claims and possible counterclaims.

Level 2: Identifies all of the following: central claim, supporting subclaims, and possible counterclaims.

Question 4

Do all levels describe performance in the positive—what is happening, rather than not happening?

Rubric levels can reflect how learning and task completion naturally progress, providing a road map that illustrates to students what novice-to-expert performance looks like. Using “I can…” statements, such as “I can use information from at least one source to describe the historical event,” leads students to think about what they’re learning to do at each level and to describe the conditions under which they can be successful.

Question 5

Do numbers emphasize quality over quantity?

Rather than using numbers to ask for more sources or examples (which might not improve quality), consider using numbers to describe how many parts of the performance task were completed successfully, marking progress. This approach is less subjective than giving a score between 3 and 4 because the work isn’t quite a 4. Some teachers I’ve worked with use a range of numbers at a single performance level to indicate each part completed at the proficient level.

Question 6

Does the rubric use kid-friendly language that encourages peer- and self-assessment?

Consider the message that students get when they see negative performance-level headings like these: “Unacceptable,” “Below,” “Limited.” When students have input into the writing, refinement, or “translation” (rewriting rubrics in their own words) of performance-level descriptions, they’re more apt to own their learning. In my classroom, I created interactive “What I need to do” rubrics that focused on proficient performance and required students to identify their evidence, which dramatically improved the quality of their work.

Peer collaboration is the best tool for developing high-quality rubrics. Take out your performance assessment rubrics and examine potential areas of improvement with colleagues. You may discover that it’s time for your rubrics to get a reboot.
