
Grant Wiggins: Defining Assessment

January 21, 2002

Grant Wiggins is a nationally recognized assessment expert who has been working in assessment reform for more than twenty-five years. He is president of the educational consulting firm Authentic Education, and with Jay McTighe, co-author of Understanding by Design, an award-winning framework for curriculum design used around the world. In this interview, Wiggins shares his thoughts on performance assessments, standardized tests, and more.

Wiggins has published several articles for Edutopia.org. In 2002, he wrote Toward Genuine Accountability: The Case for a New State Assessment System. In 2006, he wrote Healthier Testing Made Easy: The Idea of Authentic Assessment.

  1. What distinctions do you make between "testing" and "assessment"?
  2. What is authentic assessment and why is it important?
  3. Why is it important that teachers consider assessment before they begin planning lessons or projects?
  4. How do you assess project-based learning?
  5. How can technology support and enhance assessment?
  6. How do you respond to the argument that teachers don't have enough time to design and conduct authentic or performance-based assessments?
  7. Standardized tests, such as the SAT, are used by schools as a predictor of a student's future success. Is this a valid use of these tests?

1. What distinctions do you make between "testing" and "assessment"?

Our line of argument is that testing is a small part of assessment. It needs to be part of the picture. Many people who are anti-testing end up sounding anti-evaluation and anti-measurement. A good test has a role to play. The language that we like to use is, it's an audit. It's a snapshot. You don't run your business for the audit. You want more than a snapshot, you want a whole family album. But the audit and the snapshot have a place in the larger picture.

What can the test do that more complex, performance-based, project-based things can't do? Look for discrete knowledge and skill for the individual student. Many projects, because they're so collaborative, end up making you wonder, well, what about the individual student? What does the individual student know?

For instance, some state performance-based assessments always had a parallel paper-and-pencil test for the individual student, so that you had enough data on the individual. A different way to say it -- and this is what scientists and researchers say -- is to triangulate the information. Match the quiz against the project, against the PowerPoint® presentation. Now, what does the whole picture say? So, what we would say is "testing" is one piece of a portfolio.

2. What is authentic assessment and why is it important?

Authentic assessment, to me, is not meant to be the charged phrase, or jargony phrase that it has come to be for a lot of people. When we first started using it fifteen years ago, we merely meant to signify authentic work that big people actually do as opposed to fill-in-the-blanks, paper-and-pencil, multiple-choice, short-answer quiz, school-based assessment. So it's authentic in the sense [that] it's real. It's realistic. If you go into the work place, they don't give you a multiple-choice test to see if you're doing your job. They have some performance assessment, as they say in business.

Having said that, there is a misunderstanding. People say, "Well, if it's not authentic, it can't possibly be a good assessment." We never said that. We never implied it. There's a lot of authentic work that doesn't make for good assessment because it's so messy and squishy and it involves so many different people and so many variables that you can't say with any certainty, "Well, what did that individual student know about those particular objectives in this complex project that occurred over a month?" So there's a place for unauthentic, non-real-world assessments. We're just making the distinction that you shouldn't leave school not knowing what big people actually do.

3. Why is it important that teachers consider assessment before they begin planning lessons or projects?

One of the challenges in teaching is designing, and to be a good designer you have to think about what you're trying to accomplish and craft a combination of the content, the instructional methods, and the assessment. And one of the things that we've done over the past years in working with teachers is share with them how important it is to ask, "What are you going to assess? What's the evidence of the goals that you have in mind?" Otherwise your teaching can end up being hit-or-miss.

We call it backward design. Instead of jumping to the activities -- "Oh, I could have kids do this, oh, that'd be cool" -- you say, "Well, wait a minute." Before you decide exactly what you're going to do with them, ask: if you achieve your objective, what does it look like? What's the evidence that they got it? What's the evidence that they can now do it, whatever the "it" is? So you have to think about how it's going to end up, what it's going to look like. And then that ripples back into your design: what activities will get you there? What teaching moves will get you there?

4. How do you assess project-based learning?

It all starts with, well, what are our goals? And how does this project support those goals, and how are we assessing in light of those goals? So, you would expect to see for any project a scoring guideline, a rubric, in which there are clear links from the project to some criteria and standards that we value -- standards that relate quite explicitly to some overarching objective we're aiming for as teachers.

Sometimes we run into the problem that the project is so much a creature of the student's interest that there's no question that lovely learning occurs, but we sort of lose sight of the fact that now it's completely out of our control. We don't even know what it's really accomplishing in terms of our goals other than the kid is learning a lot and doing some critical and creative work.

What we have to do is realize that even if we give this kid free rein to do really cool projects, it's still got to fit within the context of some objectives, standards, and criteria that we bring to it, and frame the project so that we can say by the end, "I have evidence. I can make the case that you learned something substantial and significant that relates to school objectives."

5. How can technology support and enhance assessment?

Once we get beyond the idea that assessment is just quizzes and tests -- and see that it's the documentation whereby you make the case that the student has done something significant -- this body of evidence, if we want to stick with that judicial metaphor, proves the student actually learned something.

Technology is an obvious partner because whether it's on a CD-ROM, floppies, or an old-fashioned technology like video cameras or even overheads, the student is bringing together visual, three-dimensional, and paper-and-pencil work. We want to be able to document and have a trace of what the student has accomplished and how the student got there.

Having said that, I think sometimes technology is overused and we don't think carefully enough about the evidence we need to give the grade, put something on the transcript, and track that information over time. Many well-intentioned people say, "Let's have student portfolios of the student's work K-12." Well, that's fine for the student, but there's hardly another human being other than the kid's family that wants to wade through all that.

And that's actually another role of technology: It's a good database system -- information management, storage, and retrieval whereby we say, "I don't want to look through the whole portfolio. I want to just see some samples, some rubrics to get a sense of the student's current level of performance." Tracking information over time through technology is actually an important part of it as well.

6. How do you respond to the argument that teachers don't have enough time to design and conduct authentic or performance-based assessments?

One of the criticisms often leveled at alternative forms of assessment -- whether we call them performance, portfolio, authentic, real-world, or project-based -- is that they're too time-intensive and too expensive. It's too big of a hassle. What's the payoff? What's the cost benefit?

I can understand that argument at the state level. The state is in the audit business. And one of the things I think we've learned over the years is that, given their need to save money, to not be too intrusive, and to make the assessment reliable, they may not be able to do some of this. But many of the arguments that the critics make don't hold up at the district level at all. On the contrary, it's not very expensive. You've got all your own local people who are in the business of assessing. It's not inappropriate or a waste of time, because you can't meet the standards without doing performance-based assessment.

7. Standardized tests, such as the SAT, are used by schools as a predictor of a student's future success. Is this a valid use of these tests?

Standardized testing has a role to play as an audit, but one of the things that many policymakers and parents forget, or don't know, is that these tests have a very narrow focus and purpose as audits. They're just trying to find out if you really learned the stuff you were supposed to learn in school.

Whether these tests predict future performance or success -- they do not. Even with the SAT, ETS and the College Board are quite clear about what it does and does not predict. It just predicts freshman grade point average in the first semester. That's all. And there are plenty of studies to show that grades in college don't correlate with later success.

So, one of the things that people get in trouble with is assessment. It's like a bad game of telephone. Remember the game you played as a kid? What starts out as a perfectly intelligible sentence ends up being some wildly distorted thing by the end.

Ten or fifteen years ago, the Secretary of Education was putting out wall charts of each state's SAT performance -- as if that were a measure of school and school-system success. But the SAT was invented as an aptitude test, not an achievement test linked to curricula. It was just about general intelligence. Let's be very careful about the claims we make, about what these assessment results do and don't mean. Most state and national tests are predicting very, very narrow results about certain types of school performance. That's all.

