Thursday, June 13, 2019

Why multiple-choice questions are (too often) problematic

Research shows that too many multiple-choice questions are written poorly and therefore create bad assessments. According to that research, a few of the common issues with multiple-choice questions are that they too often
  • are unclear or otherwise poorly written.
  • are too easy to guess.
  • only test recall of content.
  • don't measure what they intend to measure.
  • become a test of something other than whether the test taker knows the content.
What's wrong with the following multiple-choice question?

Which of the following is not a good way to put out a grease fire in a pan on the stove? (Select the best answer.)
  1. Smother the fire with a metal lid.
  2. Smother the fire with water. 
  3. Smother the fire with baking soda or salt.
  4. Smother the fire with Class B dry chemical fire extinguisher contents.
The correct answer is 2. But research shows that many people who know the answer will get the question wrong anyway. That's because negatively-worded questions are harder to understand and easier to mess up. Answers 1, 3, and 4 are acceptable ways to put out a grease fire in a pan on the stove. And since they are correct, people are likely to select them and get the question wrong.
Problems like those in the list above lead to invalid questions and assessments. Validity is the most important criterion for a good test: it refers to whether the test measures what it claims to measure. If it doesn't, the answers provide little (or inaccurate) information about what people know or can do. Those tests waste time and resources. And if the test is used to make decisions (proceed to the next course, prove competence, etc.), poorly written tests are a legal battle waiting to happen.

To make assessments more valid, there must be a very clear match between learning objectives and assessment items. Research shows that this match is too often missing.

Instructional writing, as I discuss in my book, Write and Organize for Deeper Learning, is different from other kinds of writing. Writing multiple-choice questions is a specialized kind of instructional writing. Clarity and readability are critical. But here's something else multiple-choice questions must do: they must be written so that participants' answers show who knows the content and who doesn't.

In the multiple-choice question at the beginning of this post, the negative wording made the question harder to understand and therefore harder to answer correctly, which in turn makes the answers harder to interpret. If someone selects the wrong answer, how sure are we that they didn't know the correct one? We aren't.

Luckily, research also offers clear and actionable tactics for making questions clearer and a better match to learning objectives. I used to run multiple-choice question-writing workshops for companies and for staff development in higher education. I loved teaching them, but one day usually wasn't long enough to build the needed skills, and I feel a great need to help people gain real skills.

So I decided to build my first hands-on skills course on assessments and writing multiple-choice questions. It's a critical skill, and it's rarely taught. You can learn more (a LOT more) and register, or ask me to deliver this course for your team.

Can I ask you to do me a favor? Please tell others about my Write Learning Assessments course and send them the link (bit.ly/DLAW0001). I am building a set of instructional writing courses and this is the first. 

References
Chiavaroli, N. (2017). Negatively-worded multiple choice questions: An avoidable threat to validity. Practical Assessment, Research & Evaluation, 22(3), 1-14.

Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

Hopkins, K.D. (1998). Educational and psychological measurement and evaluation. Needham Heights, MA: Allyn & Bacon. 

Marsh, E. J., Roediger, H. L., Bjork, R. A., & Bjork, E. L. (2007). The memorial consequences of multiple choice testing. Psychonomic Bulletin & Review, 14, 194-199.

Marsh, E. J., & Cantor, A. D. (2014). Learning from the test: Dos and don'ts for using multiple-choice tests. In McDaniel, M. A., Frey, R. F., Fitzpatrick, S. M., & Roediger, H. L. (Eds.), Integrating Cognitive Science with Innovative Teaching in STEM Disciplines. Washington University, Saint Louis, Missouri.

Roediger, H. L., III, & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1155-1159.

Schuwirth, L. W. T. & van der Vleuten, C. P. M. (2004). Different written assessment methods: what can be said about their strengths and weaknesses? Medical Education, 38, 974–979.

Shrock, S. A. & Coscarelli, W. C. C. (1989). Criterion-referenced test development. Reading, MA: Addison-Wesley.



4 comments:

  1. LOVE articles that help change mindsets and get people to rethink how they ask questions. Thanks Dr. Patti!

  2. When do you plan to run the Write Learning Assessments course again, Patti?

  3. The next one is January 2020. If you sign up for my mailing list at the top of pattishank.com I'll send you an invite when it's ready. (Soon.)


