How to Design Better Tests for Students | Edutopia

How to Design Better Tests for Students

Ben Johnson

Administrator, author and educator
Related Tags: Assessment, All Grades

I am taking a Cisco Networking class this summer, and we are doing the equivalent of a semester of a high school course in one week. We spend two hours listening to a lecture on the key points, and then we take a test. To continue in the program, we have to pass each test with a score of at least 85 percent. Talk about high stakes!

While taking the tests, I noticed something. Because we study each chapter the night before, you would think the tests would be easy for us, but some of the questions are designed to trick you into selecting the wrong answer. So even though I'm well versed in the material, and it is fresh in my mind, I have yet to ace a single test. (Grrr.)

Although I am passing them reasonably well -- some just barely -- it is frustrating not to get a perfect score. The fact that others are struggling too helps, but the one student in the class who scores 100 percent gets open congratulations and a silent "I'll catch you yet!"

Anyway, this got me thinking. If we want to test a student's knowledge, shouldn't we just ask the question straight out? Is it necessary to throw in distractors, misleading answers, and options that are close to the right answer but not quite? Is that fair?

Hold that thought. I remember taking an assessment-design class in college and saying to myself, "What teacher is going to have time to make all these test questions and design a scientific pretest and posttest for every unit?" I have since discovered the answer to this question: I didn't make time, nor do many other teachers.

Another question, then: If valid assessment is so important, how do you do this? Before I answer that, let's go back to my other thought.

Differentiate Test Questions

Distractors, misleading answers, and trick questions are important in establishing levels of difficulty; we can't get rid of them! Remember, we have to differentiate our instruction, so why not our tests, too?

Another reason is that there are two types of tests: summative and formative. One of the safety nets of my Cisco class is that our instructor gives us three chances to take the test. After taking each exam, we can look at the right answers and figure out what we did wrong and then take it again.

This is the main characteristic of formative assessment -- a chance to reflect, and then try again. When I go back and figure out how I blew the question, I actually learn more. It sticks in my brain better!

Back in my college assessment-design class, I learned that there need to be some easy questions, some challenging questions, and some hard questions on each test. A sophisticated teacher will assign different point values for each. A more sophisticated teacher will make sure each question is also aligned with a state standard (but that is a conversation for another day).

Anyway, the struggling student will most likely get the easy questions correct, while the advanced student will be challenged with the hard ones. Both feel that the test has made them stretch, and both can feel success in the questions they answered correctly.
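The idea of assigning different point values to easy, challenging, and hard questions can be made concrete with a small scoring sketch. (The 1/2/3 point values and the `score_test` helper are illustrative assumptions, not a prescribed scheme.)

```python
# Difficulty-weighted scoring: harder questions are worth more points,
# so a struggling student who nails the easy items still earns real credit,
# while the advanced student is rewarded for stretching.
POINTS = {"easy": 1, "medium": 2, "hard": 3}  # illustrative weights

def score_test(answers):
    """answers: list of (difficulty, is_correct) tuples for one student."""
    earned = sum(POINTS[d] for d, correct in answers if correct)
    possible = sum(POINTS[d] for d, _ in answers)
    return round(100 * earned / possible, 1)

# Two easy items right, the medium and hard items wrong:
print(score_test([("easy", True), ("easy", True),
                  ("medium", False), ("hard", False)]))  # 28.6
```

Under a flat one-point-per-question scheme the same student would score 50; the weighting is what makes the harder questions count for more.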

Collaborate with Colleagues

How can we find time to create these tests? The answer is simple (unless you teach in a small school where you are the only teacher at your grade level, in which case you may have to give up your summer vacation to redo all of your tests by mining the textbooks and the Internet): you work with your teacher peers and create the tests -- pretests and posttests -- together. You share ideas, and you design a better mousetrap -- I mean test -- as a group.

Now, if you build in the sophistication mentioned above, you will all be able to gather valuable student information that helps you compare teaching performance and predict student performance on state testing.

Other time-saving tools for designing tests include question-item banks, Scantron answer sheets, and online test-taking tools. The key point is that a test made in collaboration with colleagues will surely be a better product: more useful and, overall, less time consuming for each teacher.
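As a sketch of how a shared question-item bank might work, a test "blueprint" can draw a set number of items at each difficulty level. (The bank entries, the `build_test` helper, and the blueprint format are hypothetical, not a real tool's API.)

```python
import random

# A hypothetical shared item bank built by a team of teachers.
ITEM_BANK = [
    {"q": "Define 'mean'.",                                  "difficulty": "easy"},
    {"q": "Define 'median'.",                                "difficulty": "easy"},
    {"q": "Compute the mean of 3, 7, 8.",                    "difficulty": "medium"},
    {"q": "Compute the median of 2, 9, 4, 4.",               "difficulty": "medium"},
    {"q": "Explain when the median beats the mean.",         "difficulty": "hard"},
    {"q": "Build a data set where mean and median differ.",  "difficulty": "hard"},
]

def build_test(bank, blueprint, seed=None):
    """blueprint: items to draw per difficulty, e.g. {"easy": 2, "hard": 1}.

    A seed makes the draw reproducible, so a pretest and posttest
    can be generated as distinct but comparable forms.
    """
    rng = random.Random(seed)
    test = []
    for difficulty, count in blueprint.items():
        pool = [item for item in bank if item["difficulty"] == difficulty]
        test.extend(rng.sample(pool, count))
    return test

exam = build_test(ITEM_BANK, {"easy": 2, "medium": 1, "hard": 1}, seed=1)
print(len(exam))  # 4
```

Once the bank exists, generating a fresh differentiated test is a one-line call rather than an evening's work, which is the real payoff of building it collaboratively.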

We can't expect students to spend several hours studying each night and be excited about taking tricky tests, but we can challenge them with well-designed and useful assessments so they can learn and experience success in meeting the challenges. Please share with us how you like to challenge and excite your students with difficult exams and quizzes.

Comments (36)

Ben Johnson (author):

You bring up some good points. Professionalism in testing -- what does that mean? It means that rather than teaching, teaching, teaching and then testing, we pretest, teach, and posttest; pretest, teach, and posttest; and then pretest, teach, and posttest again. The tests must be created before we teach (no, this is not teaching to a test -- it is knowing precisely what your students will need to know before you even start teaching).

Although I agree with parts of what you say, I feel compelled to help clarify your thinking. Twenty years ago, we probably could have gotten away with the shotgun, differentiate-for-everybody approach. But today, I think the scales are decidedly tipped toward the large majority of students who learn best bodily-kinesthetically. That does not mean they do not have to learn how to learn auditorily or visually, but it does mean we are often teaching and testing with methods that do not match student preferences. So, if we are smart, we will teach to their strengths (again, not ignoring the other methods of learning). Why not also test to their strengths? There are many ways to find out whether a person has learned, and not all of them have to be paper and pencil. As a Spanish teacher, I knew my students had understood a request in Spanish because they actually did something I could see. ¡Párate, por favor! (Stand up, please.) If they stood up, then they knew what I had said. Along with written tests, I gave verbal tests too. We therefore need to teach and test heavy on the bodily-kinesthetic.

Hope that helps.

Ben Johnson
San Antonio, Texas

Kylene:

I find this topic particularly interesting as a music educator, for many times my co-teachers and I struggle to find ways in which to assess and test our students. Challenged by our administration to make courses more rigorous, my colleagues and I have been working to design assessments that are appropriate for all types of learners. Also, some of the principals have asked to see more "paper and pencil" assessments from the music department, which leads into discussions of some of the issues mentioned above, particularly, how to best differentiate the test questions.

The subjective nature of music can be difficult to properly express on paper for younger students, while older students may have more knowledge, but be unable to actually produce some techniques on their instrument. Additionally, some of the classes I teach consist of two to four grade levels at a time, therefore the different age levels of students naturally fall into different categories of knowledge, skill, and ability, making testing even more challenging.

My colleagues and I differ in opinion on how to best assess abilities of knowledge and skill. Some music teachers feel that if a student is able to demonstrate a skill on their instrument, then they have met the requirements. However, other colleagues, including myself, feel that this type of assessment does not show if a student truly understands the concept of the particular skill and why it is necessary.

Perhaps a combination of these types of assessments would be best; however, we have not yet reached a balance that we feel is appropriate. In an effort to hold students to high standards, I will continue to work on ideas to strengthen the use of testing materials in my classroom. I look forward to discussing some of the ideas mentioned above with my colleagues so that we might challenge our students, as well as ourselves, as we work to help students achieve success in music.

Ashley:

I think this is a very interesting topic for discussion. I don't believe in trying to trick students on tests. What's the point? There are other ways to help students reach higher levels of thinking, but tricking them does just that...tricks them. I do like giving pretests, teaching, and then giving a posttest. It makes students responsible for their own learning. They see what they need to know when they take a pretest, and they are challenged to learn it before they take the posttest.

Danna Bonney:

This topic is troubling by nature, because creating fair tests and assessments must address the range of learning and teaching styles. I agree that fair assessments must be straightforward, clear, free from tricks, and rich in opportunities for multiple expressions of learning. Consistent and frequent reflection on what has been taught and learned, along with extension into analyzing and applying concepts, is critical. So, what do we do with the time-wasting, high-stakes standardized tests we throw at our students? I'm interested in the concept of e-portfolios that seems to be slowly arriving on the standardized testing scene. Do you see this as something that will pick up and gain momentum in the near future? I am puzzled why more states don't adopt this practice as a valid assessment tool.

Ben Johnson (author):

Regardless of what your administration does or does not promote, you have an obligation to test formatively and summatively: formatively, to give students feedback on performance and opportunities to improve, and summatively, to show the value your learning design has added for the students.

I had the same problem with colleagues who wanted to test our language students solely by what was written on the test paper. This is certainly less time consuming and less subjective than oral testing, but I believed it was not a true picture of what the student knew. Some students could ace the written test but struggled with the oral test, and others, vice versa. To get the full picture, a product-based test is necessary. What a student can do with the knowledge is more important than what the student knows about it.

I can't imagine a music class that does not require students to produce music as part of the evaluation, whether formative or summative. Frankly, the written exam, as in learning to speak a foreign language, should be the minor emphasis, while actual production should be the major one. Knowing the written scales does little good if the student can't produce the notes, just as knowing how to conjugate verbs is useless if the student cannot say the words in Spanish.

I am sure you will find a good balance of theory and performance.

Ben Johnson
San Antonio, TX

Ben Johnson (author):

Thanks for commenting. You are a rarity if you give pre-tests and post-tests. Way to go!

You ask what the point is in "tricking the students." Don't we want them to be successful? Why make it hard for them? There are two points I want to make in answer to those questions.

As I mentioned in the blog, if we differentiate our instruction, we have to differentiate our testing. Differentiating means different levels of difficulty: hard, medium, and easy. The purpose of hard questions is not to trick students into making mistakes; it is to challenge them to get it right. In formative testing, the purpose of difficult questions is to challenge students to learn from their mistakes and do better the next time (they will remember their own mistakes better than a teacher simply telling them what to know).

The second point I want to make about difficult questions is that when you jolt a student with one, you are actually creating more dendrites and a larger capacity to learn than if you give them only straightforward questions.

It is like teasing your kids at home about how spaghetti is harvested from the stalks of spaghetti plants, or how bees fly backwards when they get tired of flying forwards, or why the magnetic pull of the earth makes a buttered piece of toast always fall butter-side down. All of this gets them to think critically (well, and have a bit of fun too). This critical thinking is sorely lacking in many of our students' academic careers.

Life is full of tricks, puzzles and conundrums for the students to figure out. Rarely is the answer in plain sight for all to see. We have to teach the students to work it out and find the answers that are not obvious.

I am curious about your reference to other ways of getting students to higher levels of thinking and how that relates to your lower level tests.

Ben Johnson
San Antonio, TX

Amanda Barrineau:

I agree with you. Tricking students sets them up for failure. I believe in giving a pretest, which lets you prepare lessons for the week. I then give a posttest that is designed in the same way that I taught the material.

Ryan Siegle:

I agree with your outlook on the importance of keeping assessment valid by refraining from tricking your students. Trickery essentially discourages students from demonstrating what they know. In turn, this discouragement hurts motivation and a positive view of lifelong learning. While high-achieving students may see through the tricks on a test, many low-achieving students will give up or become frustrated.

I am currently taking a master's course in assessment and am specifically studying summative assessment. Like trickery, summative assessment may discourage low achieving students and may produce test anxiety. How can we as educators create tests that are effective and friendly to both the high achieving and low achieving students? What practical steps can teachers take to allow for optimal performance in all students?

Thank you in advance for your response.

J Recchio:

I have spent a number of years in college and postsecondary education in the area of teaching, and I agree that writing a good test is a time-consuming but essential part of the job. However, in my own education classes I did not have a class that addressed writing an assessment until grad school. I do not recall having any instruction on how to write a good test. Most of the time is spent writing lesson plans, and not much on assessment, in my opinion. Has anyone else found this to be true? In a time when there are high-stakes tests and assessment seems to be becoming more of a focal point of teaching, I think we need to better prepare our younger generation of teachers for this process.
