
Multiple-Choice Quizzes (MCQ)

Following my previous post on Thinking about assessment, where I mentioned using multiple-choice quizzes (MCQs) for formative assessment, I had a few messages from colleagues asking how effective they really are.


MCQs are seeing a rise in popularity in the classroom, especially when used for retrieval practice or formative assessment. Many schools used them during the pandemic via Google or Microsoft Forms to monitor students’ progress and learning. However, some practitioners are concerned that they suit subjects such as the sciences much better and are less appropriate for the arts and humanities.

I believe that in the languages classroom, a well-designed MCQ can be powerful for identifying what core knowledge (vocabulary or sentence structures) learners have secured, and for addressing gaps in knowledge and misconceptions.


Expanding on the work of Dylan Wiliam, Daisy Christodoulou (Hendrick and Macpherson, 2017) recommends that, for MCQs to be valid and effective, we should avoid answers that are obviously incorrect and instead prepare distractors that are incorrect but still plausible, and that we should not tell students how many correct answers there are, to reduce the likelihood of guessing. For novice learners, however, Christodoulou advises providing the number of correct answers to begin with, as this can act as scaffolding and help avoid cognitive overload.

Blake Howard also recommends always offering a ‘don’t know’ option, as this discourages guessing and prevents students’ gaps in knowledge from being hidden by a lucky guess.


Although some critics of MCQs as a strategy for formative assessment and retrieval practice argue that many of these quizzes focus on declarative knowledge – simple recall of vocabulary and paired associates (Dunlosky et al., 2013) – I believe they can also be used to assess students’ procedural knowledge, allowing the teacher to check students’ understanding of how to apply this content before they attempt, for example, an extended piece of writing.


This process can identify and address common errors and misconceptions much faster, before students move on to the next stage (an independent piece of work), rather than trying to ‘unpick’ them from the finished piece afterwards.


When creating MCQs, Christodoulou also suggests that students could be asked to add a confidence score (1-5) to each question, indicating how confident they are in their answer. The rationale is that if students find that an answer they rated 5 (absolutely confident) is incorrect, they are more likely to learn the correct answer to that question due to the hypercorrection effect (The researchED Guide to Assessment). However, I have not tried this in my own classroom practice yet, so I cannot confirm how effective it is. It is something I intend to explore in the new academic year.


It is my belief that if we want MCQs to support students’ durable retention of knowledge, the emphasis needs to be on creating good questions which address both declarative and procedural knowledge within our subject domain.


Below I share some examples of MCQs that I will be using in my own classroom. I am open to further ideas and constructive feedback, as I am sure there is a lot of expertise, knowledge and experience within the M(F)L community.


Examples of MCQs
