Teaching Mathematics: What, When and Why

An in-depth examination of mathematics education, topic by topic

A variation on multiple choice questions

from Terry Mills

On constructing mathematical questions. For some time, I have been opposed to the usual multiple choice questions. My reasoning is as follows: if I simply give the answer (c), that tells you nothing about my learning. This week, my view has shifted. I like questions like this.

Which of the following statements are true and which are false? Justify your answers.

Allow me to clarify my idea about questions that are similar to multiple choice questions (MCQs) but do not share the same difficulties. I now like questions like this.
Which of the following statements are true and which are false? Justify your answers.

(A) Statement 1
(B) Statement 2
(C) Statement 3

All 3 statements deal with the same theme (e.g. quadratic functions, or correlation). Of course, there is nothing special about having 3 statements.

More time would have to be allowed for such questions than one might allow for a normal MCQ. Two marks would be allocated for each part: 1 mark for the correct answer (true/false) and 1 mark for the justification.

There is some subjectivity involved in allocating marks for the justification. However, I would take into consideration the year level of the students, and what we had covered in class up to the time of the assessment. I would have to frame questions so that students would not be tempted to write “I used my calculator”.

I am not completely satisfied with the wording of “Justify your answers” and welcome any suggested improvements.

20 responses to “A variation on multiple choice questions”

  1. […] that he dislikes multiple choice questions. Now, on Tom Peachey’s new blog, Terry has a post to discuss a suggested variation of […]


  2. Either something has been lost in the formatting cloud or your question is above my level of intellect.

    (A) the LaTeX formatting didn’t work.
    (B) I am of lesser intellect than the intended audience member.

    (Both answers are of course correct)


  3. I was also initially confused, before Terry added the follow-up. Treat the post as a general format for the type of question, not as an example of such a question. The difference from conventional MCQs is that the testee is asked to explain.


  4. This does not seem to have the benefit (which is the only reason to use them at all) of MCQs: near-zero marking overhead. So I fail to see why this is better than just asking a short-answer question. Sorry!


    1. Glen (and others who are pro-MCQ): with a large sample size, I think good MCQs can be very informative if you also study the proportion of students who gave each answer; it can tell you, as the question setter, where the misconceptions lie. This does require the MCQs to be very well written, and the number of respondents to be high enough to give a decent distribution.


    2. I too am opposed to the use of MCQs, mainly on the grounds that they can be unfair when the student knows more than the examiner. (Not that that would happen in Victoria.) Terry’s suggestion seems to overcome that problem, as the student gets the chance to justify their answer – assuming that the examiner is open to learning, and bothers to read it …


      1. I mean, those objections seem to assume that the MCQ is badly written. Do you have any objection to the use of well-written MCQs? (Insofar as they are a means of saving marking time, and are used in combination with other kinds of questions.)

        I think that’s the only meaningful way to have a discussion. After all, I’m also against badly-written long answer questions! Down with bad exam questions, hurrah!


    3. johnnofriendofvcaa:

      The marking overhead (MOH) is very low, but the writing overhead (WOH) is very large. Therefore a minimum number of students n is required before WOH < MOH holds, where MOH is totalled over the class. This is simplistic, since the number of MCQs is also a factor. For a conscientious secondary school teacher, writing good MCQs will probably take longer than marking a comparable number of short-answer questions. The goal is for someone conscientious in the teaching team to take a bullet for the others.
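      The breakeven reasoning above can be sketched in a few lines of Python; the time figures are invented purely for illustration, not drawn from the comment.

```python
import math

# Sketch of the WOH < MOH breakeven idea: MCQs pay off once the total
# marking time saved exceeds the one-off extra writing time.
# All numbers below are hypothetical.

def breakeven_students(write_mcq, write_short, mark_mcq, mark_short):
    """Smallest class size for which a set of MCQs saves time overall,
    compared with an equivalent short-answer set.
    Writing times are one-off; marking times are per student (minutes)."""
    extra_writing = write_mcq - write_short       # one-off writing overhead
    saving_per_student = mark_short - mark_mcq    # per-student marking saving
    if saving_per_student <= 0:
        return None  # MCQs never pay off
    return math.floor(extra_writing / saving_per_student) + 1

# Say the MCQs take 60 min to write vs 15 min for short-answer questions,
# while marking drops from 3 min to 0.5 min per student:
print(breakeven_students(60, 15, 0.5, 3))  # 19
```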


  5. Got it (I think).

    Another variation I have seen is where the different options are worth different numbers of marks (and can be negative for some of the choices). So I could ask a question about a function:

    Option A (states basic recall, and is true): a student who chooses this scores 1 mark.
    Option B (states a key detail that only someone who really understands the graph will see to be true): scores 3 marks.
    Option C (similar to B, but not actually correct): loses 1 mark.
    Option D (totally wrong in every sense): loses 3 marks.

    This would take a lot longer to write but might be a bit more informative, overall?
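    One property of the weights above is worth noting: a student guessing uniformly at random expects to score zero, so the negative marks act as a built-in guessing penalty. A quick illustrative check (my code, not the commenter's):

```python
# Expected score under blind guessing with the proposed weights
# (A = +1, B = +3, C = -1, D = -3), each option equally likely.
weights = {"A": 1, "B": 3, "C": -1, "D": -3}
expected = sum(weights.values()) / len(weights)
print(expected)  # 0.0 -- random guessing gains nothing on average
```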


  6. johnnofriendofvcaa:

    The fact is, the MCQ in its traditional form is here to stay because it’s a cheap way of testing. And if the question is intended not to ‘trick’ and bamboozle but simply to probe understanding of a concept (or concepts), wrong options that logically follow from a common mistake or misconception are not difficult to construct (though it is time-consuming). So the answers can be very informative. In my opinion, the biggest problem with an MCQ is often the writer(s) of the MCQ.

    And I totally support Glen’s comment:
    “… I fail to see why this is better than just asking a short-answer question.”
    In other words, if MCQs are going to be replaced, replace them with short-answer questions! Not some convoluted amalgam of MCQs!


    1. I think commenters have missed the novelty of Terry’s suggestion. Traditional questions (multiple choice or not) are all based around the “correct” approach to a problem. Asking students to comment on a variety of approaches (even completely erroneous ones) will give a novel insight into the students’ development.


      1. Once upon a time, in a Study Design far, far away …

        There was a SAC assessment format called ‘Item Response Analysis’. A series of MCQs was given and students had to answer questions about each MCQ, such as:
        “What mistake has a student who chooses Option A made?”
        “Explain why Option B is wrong.”
        “State the correct option, giving a reason for your answer.”
        “What change could be made to the question so that Option C is correct?”

        This type of assessment is no longer allowed by VCAA.


      2. Interesting Johnno. It does seem more directed than Terry is suggesting, in that some options are fingered as wrong. How was it received?
        I think that Terry has in mind within-classroom testing as part of the normal teaching process, where the VCAA writ does not extend. And achievement testing in the lower years?


      3. johnnofriendofvcaa:

        ‘Item Response Analysis’ was a popular choice of format for a SAC back in the day.
        Note that specific options don’t have to be fingered as wrong. For example, you could ask:
        “Choose an incorrect option and explain the mistake made by a student who chooses that option”.
        “Choose a different incorrect option and explain how the question could be changed so that your chosen option is now correct.”
        “Choose the correct option and briefly explain why it is correct.”

        ‘Item Response Analysis’ was one of several formats the VCAA said could be used for a SAC, once upon a time. I don’t know why the VCAA decided to cancel it. Then again, many of the VCAA’s decisions (and indecisions) are inexplicable to me.

        If Terry wants “questions that are similar to multiple choice questions (MCQs) but do not share the same difficulties”, I think ‘Item Response Analysis’ is a good alternative.


  7. Your post seems very surface-level, giving an opinion without looking at the literature on the topic. There is a vast amount of research on MCQ instruments, and significant studies show general efficacy at the population level. They are used in many fields (the military, entrance examinations, professional certification, AP exams, etc.) and they do fine there, if you set the bar at overall benefit to the assessment/training process rather than at some “can’t ever have a mistake” level of math-y Euclidean thought process.

    All testing methods have limitations and tradeoffs that need to be made in terms of student and grader time. Of course you don’t see the thought process from an MCQ. So what? You’ve got a bunch of them, and a bunch of students. It works out, in general. Statistically.

    You are implicitly ignoring practical factors and assuming grader time is free. What if you have a large lecture-hall course? Do you really think the teacher should care about the thought process behind a specific wrong answer for a specific student? (In constructing the test, he will have considered common errors. But this is population-level thinking, not student-specific.)

    Not everyone has the time or money for a personal tutor. Just like not everyone can afford paying a personal trainer to help with their workout. That doesn’t mean there’s zero benefit to doing a workout on your own. Similarly, there’s not ZERO value in an MCQ instrument. In fact, there’s a tremendous amount of value in them. Is it as good as an oral exam in the Oxford tutorial style? Probably not. But time is money. And MCQs are still very useful.


  8. Thank you everyone for your thoughtful, helpful comments.


  9. @johnnofriendofvcaa

    Thank you for your comments on item response analysis. I should learn more about this. There is a lot that we can learn about teaching from psychology.

    However, I have an issue with the question “What mistake has a student who chooses option A made?” Answering it involves being able to read the mind of the student who chose Option A.

    Perhaps the student really meant to choose B (the correct answer) but accidentally wrote A. The student understood the question, knew that B was correct, but made a typographical error in writing A. What does the student’s answer tell us about the student’s learning?


    1. “Perhaps the student really meant to choose B (the correct answer) but accidentally wrote A.”

      This is the Euclidean math-y mindset. Instead of looking for axiomatic perfection, just roll with the punches of statistical chance.

      As for the error, the student will see it when he reviews the test himself; there’s a pretty powerful feedback loop. Also, at a certain point, I don’t CARE if he knew the right answer and wrote the wrong one. Writing correctly is part of life. What if the kid is aiming a howitzer? Is it OK if he uses the tables correctly to aim the gun and just “has a typo” on the actual angle? No. I want that shell landing on the enemy, not on me.

      What you SHOULD have asked is: what about people who just guess right? The answer here is a guessing penalty. Or, even better, just rolling with it and letting them have it. It evens out over time. You have plenty of levers to evaluate. And they will never know everything perfectly. Heck, there is no time to even test them on every permutation and combination of all the techniques they are learning (including different confounders).

      Just…get out of the mindset of worrying about individual students and individual questions. This is a multi-trial statistical event.

      This is not an Andrew Wiles proof where every little bit needs to be correct. This is a process where even the A students (regardless of evaluation method) will have SOME gaps in their knowledge. Learning, teaching and grading are all human processes, much more similar to business or engineering processes. Go for the global maximum. Don’t get diverted into worrying about perfection on individual events.


      1. Speaking of mindsets …

        I guess it is natural that people reading the words “multiple choice questions” will think of formal exams. This is especially true on this site; it is a spinoff from Marty’s Bad Mathematics blog, which focuses on the failures in local exams. But there is the more important use of testing as a component of classroom teaching. Not only does such testing reinforce students’ learning, it provides feedback to the teacher – what works, what doesn’t, what is needed before the next topic starts. And what is happening with each individual student. Terry’s submission seems concerned with these issues.

        So where exactly would such questions occur? If my class had fewer than 40 students, I would set a few questions to be handed in on a regular basis. It strikes me that this item response analysis would be valuable there.


    2. johnnofriendofvcaa:

      “What mistake is a student who chooses option A likely to have made?”

