Tuesday, September 03, 2013

Types of questions for exams

QUESTION: Give as many types of exam questions as you can, give examples, and comment on whether each is a good type of question.

My answer below.

  1. A problem that some students can get right even if they never had the course because they have seen it in some other course. EXAMPLE: In a course on Ramsey Theory, have a question that uses the Probabilistic Method. PRO: The question is still in scope for the course. CON: A bit awkward that someone may have learned the material elsewhere. UPSHOT: This is FINE.
  2. A problem that some students can get right even if they never had the course because they are quite clever. EXAMPLE: Easy combinatorics or probability in a sophomore Discrete Math course. PRO: The question is still in scope for the course. CON: A bit awkward that someone may have missed class but still got it right. UPSHOT: This is FINE.
  3. A rigged question--- students saw two examples in class, two examples on the HW, and now have to do one themselves. EXAMPLE: proving numbers irrational. PRO: Clearly in scope and fair. PRO: They will surely understand what you are asking for. CON: They may get it right via memory rather than understanding (they may not even know the difference). UPSHOT: This is FINE though it requires some planning ahead of time.
  4. A rigged question with a twist--- students saw two examples in class, two examples on the HW, and now have to do one themselves, but it's DIFFERENT in an important way. EXAMPLE: In class and on the HW do many problems like "Here is a distribution and here is a random variable; what is its expected value?" but on the exam ask "Here is a random variable and here is the expected value we want; give a distribution that achieves it." PRO: Harder to memorize a template. CON: May be hard to grade as they say odd things. CON: It may be confusing to know what you are asking for, even for good students. UPSHOT: This is FINE though it requires some planning ahead of time.
  5. A problem that requires utter mastery of the material but no creative thought. EXAMPLE: Give the algorithm (that we did in class) for proving that CFG languages are in P. Write it up so that someone who has never seen it can understand it. PRO: Straightforward yet hard to get via memorization. CON: Might be too time consuming for an exam. CON: (From experience) no matter how much you say, in bold letters, things like Write it up so that someone who has never seen it can understand it, they will skip steps and write it up badly, and it's hard to tell if THEY really know it. UPSHOT: I do this but only in certain cases.
  6. A problem that requires them to be creative (this is ill-defined, but it's the opposite of the one above). PRO: If they truly understand the material they can do this. CON: My PRO may be incorrect. UPSHOT: Absolutely fine for HW, which is not worth much of the grade anyway, and I can enlighten them. I tend to avoid these on exams. Though the line between creativity and standard is a thin one. (Problem for an exam: How thin, in millimeters?)
  7. A giveaway question. When I teach Formal Lang Theory I have (going back to when I was Harry Lewis's TA in 1981) had on the exam: Give an example of a string of length 4 over the alphabet {a,b}. An unintended consequence: if they CAN'T do this it's a really bad sign. I have asked this question many times and I have literally NEVER seen someone get it wrong and pass the course. I have gotten the following wrong answers: ab*, ababa, and a DFA recognizing aaaa (which I was tempted to give credit to but did not). Incidentally, the most common right answer has always been abab. Second is abba. PRO: I put this one early in the exam to calm them down.
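For concreteness, here is how the reversed expected-value question from item 4 plays out; a throwaway Python sketch with made-up numbers (nothing here is from an actual exam):

```python
# Forward direction (the drilled template): given a distribution, compute E[X].
def expected_value(dist):
    """dist maps each value of the random variable to its probability."""
    return sum(x * p for x, p in dist.items())

# Forward question: X is 0 w.p. 1/2, 1 w.p. 1/4, 4 w.p. 1/4; so E[X] = 1.25.
forward = {0: 0.5, 1: 0.25, 4: 0.25}
assert expected_value(forward) == 1.25

# Reversed question (the twist): exhibit ANY distribution with E[X] = 2.
# One of many valid answers: mass 1/2 on 1 and mass 1/2 on 3.
reversed_answer = {1: 0.5, 3: 0.5}
assert expected_value(reversed_answer) == 2.0
```

The reversed direction has many correct answers, which is exactly what makes it harder to grade.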
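And as a sanity check on the giveaway question in item 7: there are only 2^4 = 16 strings of length 4 over {a,b}, so the whole answer space is easy to enumerate (a quick sketch; abab and abba are the common right answers reported above):

```python
from itertools import product

# Enumerate every string of length 4 over the alphabet {a, b}.
strings = ["".join(t) for t in product("ab", repeat=4)]
assert len(strings) == 16          # 2^4 candidates
assert "abab" in strings and "abba" in strings

# The wrong answers from the post fail for different reasons:
# "ab*" is a regular expression, not a string over {a,b};
# "ababa" has length 5; a DFA is a machine, not a string.
assert "ababa" not in strings
```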
I try to ask some of each type on an exam. However, sometimes a question can be easier or harder than you intended, or be harder to grade than you thought, or not be in the category you thought it would be in. The hardest line to draw is between questions that are a matter of mastery and those that are a matter of creativity. Another issue: some students can abstract better than others.

When teaching a large course such as sophomore discrete math (150-200 students) I tend to get a uniform distribution, skewed a bit toward the high side. More precisely: I tend to get roughly 10 students in EVERY 10-point interval: 0-10, 10-20, 20-30, ..., 90-100, with fewer on the low side and more on the high side. The benefit of this is that the students who get (say) less than 40 CANNOT say Well--- everyone did badly. They really are sent a signal to either work harder or drop (I tell them this directly as well). I don't understand profs who give exams where nobody cracks 50/100 (I have heard this is common in Physics). They are wasting half of the grade spectrum.


  1. A true/false math question where they then have to prove their answer. A student who picks the wrong answer can figure that out during the proof stage and then correct their answer. A student who picks the wrong answer and "proves" it has shown they really don't have a clue.

  2. Off topic: Why on earth haven't the calls for papers for both STOC 2014 and CCC 2014 been published yet?

    1. If you google STOC 2014, the first relevant hit (the second
      hit when I did it) goes to a webpage that has Topics/Call for
      papers in paragraph form rather than list form, so that may
      have confused you when you did the Google search.

      The deadline is Nov 30. I do not know how much lead time they
      usually give.

      CCC I have not been able to find. Again, I do not recall what
      the usual lead time is so I do not know if this is unusual.

      BACK to my TOPIC- if you give a HW assignment where the answer
      can be found by Google, is it worth giving? YES if the point is to
      have them LEARN the thing they Google. NO if you are trying to test
      cleverness. The line is not all that clear.

  3. Bill, thanks. But Googling STOC 2014 does not give a call for papers for STOC 2014. It does give one generic paragraph named "call for papers", but with no PC members, for instance.
    For CCC 2014, the situation is much worse. There's basically nothing.
    I guess these conferences shoot only for the regular participants from about 20 research centers in the US (and some more from other countries), and so they don't feel obliged to even publish their CFP.

    1. Point well taken- the webpage that we both saw has something
      CALLED `Call fo papers' (they left out the r- maybe a bad sign)
      but all it is is a list of topics. No list of who's on the
      committee, or where/how to submit.

  4. #5 is just agony to place on an exam and not so helpful to students or teaching staff, much better for homework. All other questions are really helpful and they may help to give you the distributions that you are seeing. Have you ever checked if students who could do questions of type #6 and all other questions were the A's, type #4 and all other questions the B's, and so forth? Would be interesting to see, but that's what my intuition would be.

    1. #5- asking them to write a (possibly long) proof or algorithm so
      that anyone can understand it. When I have done this they are
      warned ahead of time (you will be asked one of the following
      three questions on the exam next week...) or I make it not THAT long
      a problem (you may assume Lemma XXX). Still hard to grade, so never
      in a big class. In short- YES there are some issues, but it can be done right.

      The only trend I have noticed is that some of the bright-but-lazy
      students get the questions that don't require having gone to class
      but miss those that do. They don't do well in the end.

  5. I completely relate to your 7th type of question. Last semester, I encountered one on the final of my Theory of Computation class. The first question asked us to count the number of "l"'s (L's) in the string: "hello world". I was quite bummed to have spent an hour on the problem, only to learn afterwards that it was deemed "trivial" by peers. In the end, I got this problem wrong and failed the course, despite applying most of what I had learned:

    I first wondered if the generalized decision version of the problem (Given a string, a symbol, and a number n, determine whether #symbol <= n) was tractable or even decidable. I knew that it was not regular due to the Pumping Lemma, even when provided an ordering. Rice's theorem did not seem to obviously apply here. After trying various diagonalization arguments and failed reductions to the Halting problem, I convinced myself that the problem was likely decidable.

    The big hurdle was the absence of a specified ordering on the input. Because of brain damage caused by a discrete math course, I cannot parse strings properly without error. I began by selecting arbitrary symbols of the input and checking for the presence of L's, individually. I was able to verify that there was at least one L present, but could not convince myself that L's found in subsequent trials were distinct from the first. I did discover at least 5 characters that were NOT L's.

    Since I had a lower bound (one L), I figured a good next step was to establish an upper bound. In a somewhat cheaty fashion, I used the knowledge that there are no more than 80 monospaced 12 pt characters written on a single line of a standard page. Since at least 5 of those are not L's, we have an upper bound of 75 L's.

    Though handicapped, I knew that the physical page layout preserved the ordering of the input. In my excitement, I folded the paper lengthwise, partitioning it into two sections. I wanted to properly sample this time, so I sought a good source of random bits. I asked the proctor to generate a random sequence of 75 ones or zeros for me. After chuckling, they audibly gave me one that consisted of all ones. Surprised, I accepted that low probability events do occur. Using this sequence, I randomly checked symbols from each section independently. Eventually, I found TWO distinct L's. This improved my lower bound to 2 L's.

    To avoid tearing, I chose not to fold it further. I requested blank paper in the hopes that I could use it to "compute" on the exam (computation is local, after all). After some involved folding and test runs, I believe I succeeded in creating a bizarre paper TM head that could parse the input for me. I used a single strip of paper that slid through the machine as a way of encoding the answer. Every time my machine found an L, I tore an extra piece off from that page. At the end of the computation, the number of torn pieces determined the number of L's.

    Time ran out in the middle of my computation on the "hello world" string. I requested and was permitted an extra 5 minutes to finish. I rushed the process, frantically tearing pieces off my strip as I witnessed new L's. The proctor became quite upset at the sight of a student shifting around a complicated origami "sculpture" while tearing up a long strip of paper. Over my objections, and to my despair, the proctor violently pulled the exam and machine away. Through the shower of small bits of paper (including those from the test computations), I cried as I watched my creation get crumpled up in their hands. This very much has been the most traumatic and saddening moment of my life so far.

    I somehow managed to get a 2/100 on the exam, probably because of my lower bound. I guess I really don't have what it takes to do well in cs theory. Since the course is required for a degree, I am retaking it next semester with another instructor. I hear they prefer oral exams rather than written ones. Perhaps things will go better for me next time.
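For completeness: the generalized decision problem described in the comment above really is decidable, by a single linear scan; a minimal sketch (the helper name at_most_n is my own, not from the comment):

```python
def at_most_n(s, symbol, n):
    """Decide the generalized problem from the comment above:
    given a string s, a symbol, and a number n, is #symbol(s) <= n?
    One linear scan suffices; no folding or tearing of paper required."""
    return s.count(symbol) <= n

# The exam instance: "hello world" contains exactly 3 l's.
assert "hello world".count("l") == 3
assert at_most_n("hello world", "l", 3)
assert not at_most_n("hello world", "l", 2)
```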