Thursday, June 13, 2019

Why multiple-choice questions are (too often) problematic

Research shows that too many multiple-choice questions are written poorly and therefore create bad assessments. According to research, a few of the most common issues are that multiple-choice questions too often
  • are unclear or otherwise poorly written.
  • are too easy to guess.
  • only test recall of content.
  • don't measure what they intend to measure.
  • become a test of something other than whether the test taker knows the content.
What's wrong with the following multiple-choice question?

Which of the following is not a good way to put out a grease fire in a pan on the stove? (Select the best answer.)
  1. Smother the fire with a metal lid.
  2. Smother the fire with water.
  3. Smother the fire with baking soda or salt.
  4. Smother the fire with the contents of a Class B dry chemical fire extinguisher.
The correct answer is 2. But research shows that many people who know the answer will get this question wrong anyway. That's because negatively-worded questions are harder to understand and easier to mess up. Answers 1, 3, and 4 are all acceptable ways to put out a grease fire in a pan on the stove. Because those answers describe correct actions, people are likely to select one of them and miss the question.
These problems, and the others in the list above, lead to invalid questions and assessments. Validity is the most important criterion for a good test. Validity refers to whether the test measures what it claims to measure. If it doesn't, the test answers provide little (or inaccurate) information about what people know or can do. Those tests waste time and resources. And if the test is used to make decisions (proceed to the next course, prove competence, and so on), poorly written tests are a legal battle waiting to happen.

To make assessments more valid, there must be a clear match between learning objectives and assessment items. Research shows that this is too often not the case.

Instructional writing, as I discuss in my book, Write and Organize for Deeper Learning, is different from other kinds of writing. Writing multiple-choice questions is a specialized form of instructional writing. Clarity and readability are critical. But multiple-choice questions must do something more: they must be written so that participants' answers show who knows the content and who doesn't.

In the multiple-choice question at the beginning of this post, the negative wording made the question harder to understand. As a result, it was harder to answer correctly, which in turn makes the answers harder to interpret. If someone selects the wrong answer, how sure are we that they didn't know the correct one? We aren't.

Luckily, research also offers clear and actionable tactics for making questions clearer and a better match to learning objectives. I used to run multiple-choice question-writing workshops for companies and for staff development in higher education. I loved teaching them. But one day usually wasn't long enough to build the needed skills, and I feel a great need to help people gain real skills.

So I decided to build my first hands-on skills course on assessments and writing multiple-choice questions. It's a critical skill, and it's rarely taught. You can learn more (a LOT more) and register, or ask me to deliver this course for your team.

Can I ask you to do me a favor? Please tell others about my Write Learning Assessments course and send them the link. I am building a set of instructional writing courses, and this is the first.

Chiavaroli, N. (2017). Negatively-worded multiple choice questions: An avoidable threat to validity. Practical Assessment, Research & Evaluation, 22(3), 1-14.

Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

Hopkins, K. D. (1998). Educational and psychological measurement and evaluation. Needham Heights, MA: Allyn & Bacon.

Marsh, E. J., Roediger, H. L., Bjork, R. A., & Bjork, E. L. (2007). The memorial consequences of multiple choice testing. Psychonomic Bulletin & Review, 14, 194-199.

Marsh, E. J., & Cantor, A. D. (2014). Learning from the test: Dos and don'ts for using multiple-choice tests. In McDaniel, M. A., Frey, R. F., Fitzpatrick, S. M., & Roediger, H. L. (Eds.), Integrating Cognitive Science with Innovative Teaching in STEM Disciplines. Washington University, Saint Louis, Missouri.

Roediger, H. L., III, & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1155-1159.

Schuwirth, L. W. T. & van der Vleuten, C. P. M. (2004). Different written assessment methods: what can be said about their strengths and weaknesses? Medical Education, 38, 974–979.

Shrock, S. A. & Coscarelli, W. C. C. (1989). Criterion-referenced test development. Reading, MA: Addison-Wesley.

Friday, May 24, 2019

Non-Conscious Aspects Of Learning And Performance

Being on autopilot has a lot of implications for learning and performance. Recently, Guy Wallace (@guywwallace on Twitter) posted about experts having difficulty figuring out what people must learn to perform a task. Experts often unintentionally leave things out: their performance is highly automated, so they no longer have conscious access to exactly what they are doing.
Automated and non-conscious prior knowledge is stored in long-term memory. An expert’s deep prior knowledge makes them far more capable of solving difficult problems in their area of expertise. But because it’s automated and non-conscious, they’re often unaware of exactly what they are doing.
Guy pointed me to Richard Clark's article, The Impact of Non-Conscious Knowledge on Educational Technology Research and Design, and it turned out to be a goldmine of important information. Experts, research finds, tend to be conscious of the physical actions they take and the knowledge they use. But they are far less aware of the mental activities they use to perform tasks and solve problems.

Monday, May 20, 2019

Should We Use Background Music With Instruction? No.

The general rationale for not using background music is that it increases harmful cognitive load. Cognitive load relates to the mental processes (like perception, thinking, and organizing) used for thinking, learning, and working. Working memory must process new information, but it has considerable constraints in both capacity for new material and holding time. John Sweller, a well-known researcher and writer on memory, cognitive load, and other aspects of learning, reminds us that we must design in accordance with how our mental processes work. If we don't, people can't learn. And under current organizational conditions, learning quickly is a mandate.
There are two types of cognitive load: helpful and harmful. We call the harmful type extraneous cognitive load and, when we don’t reduce this type of cognitive load, we make it harder to learn. Here are some examples of extraneous (harmful) cognitive load:
  • Too much content
  • Decorative and irrelevant graphics
  • Unnecessary explanations
  • Unnecessary media
Stop reading for a moment and think about why these items cause harmful cognitive load, given what I told you about working memory (Really! Try to answer the question before going ahead). Then look at my answer below.
Read the entire article on eLearning Industry.

Microlearning, Macrolearning. What Does Research Tell Us?

In the last year I have increasingly heard L&D practitioners talk about microlearning like it's "the answer." What is it the answer to, exactly? The response: nearly everything. But knowing that we must create learning experiences that fit specific needs, I felt doubtful. Still, until I understand what the preponderance of research says, my opinion is just a guess based on what I already know. So I set out to learn more, and this article sums up what I learned.
What does research say about microlearning? In this article, I’ll offer some definitions of microlearning that offer clues about important aspects and explain what research and researchers have to say about microlearning. I’ll compare what people say are the benefits of microlearning against what we know from research. And I’ll discuss what micro and macro approaches offer workplace learning and how we might use each.
I can sum up much of this article with a specific insight from Professor Christian Glahn at the Hochschule für Technik und Wirtschaft, who studies learning and work:
Microlearning is not the solution to all workplace learning needs.
Read the entire article on eLearning Industry.

How Well Do We Learn From Experiential Or Inquiry Learning Approaches?

Direct instruction directly teaches the content. People are supplied with content and activities that help them build needed background knowledge. And we make sure that what they know is correct and usable. Indirect approaches use experiential or inquiry methods that prompt discovery of needed information and often simulate and test performance.
Training people to identify hazardous materials in the workplace, for example, would likely have lessons, labs, and tests in a direct approach. In an experiential approach, people would likely work through scenarios or case studies.
Paulo Freire, a learning theorist, disapproved of what he called the "banking model of education," where teachers (or trainers or instructors) deposit information into students' heads. The learning sciences clearly show that we cannot directly fill people up with knowledge (my new book, Manage Memory for Learning, explains how we do learn). People do not "record" what they learn during instruction for playback during application.
Read the entire article on eLearning Industry.

Does Time Matter For Learning? It Does.

Stakeholders who request workplace training and other performance interventions often push for speed over quality. Workers are busy and time to learn is time where people could be accomplishing job tasks. We design primarily for speed as a result. Some of the most important learning tactics, such as adequate and varied practice and practice for remembering, are often left out.
For example, sales training for new mobile phones may include phone specifications, images, and diagrams. But training designed for speed too often omits the practice needed for performance. Practice remembering key specifications over time helps people use those specifications on the job. Varied practice in helping customers choose among the newer models builds the skill of using those specifications to recommend the right phone. Research shows these types of practice are among the key tactics for making training stick and making it usable.
Speed is a key part of efficiency. Efficiency is the time, effort, and other resources it takes to do something. Efficiency, however, isn't adequate unless the training also achieves the needed outcomes.
Read the entire article on eLearning Industry.

What Research Tells Us About Chunking Content

Research by usability experts Nielsen Norman Group tells us that people prefer content that is logically divided or chunked. They define chunking as breaking up content into smaller, distinct units of information (chunks), as shown in the right column of Figure 1. This is as opposed to presenting content in homogeneous blocks like in the left column of Figure 1.
Chunking doesn’t mean simply breaking up text into smaller pieces. It means breaking it up into related, logical, meaningful, and sequential segments.
Read the entire post at eLearning Industry.