This op-ed by Richard Thaler caught my attention because I have had a similar experience.
In my statistics classes, I have noticed a pattern: if the mid-term exam is hard, with a lower average score (say 75-80%), the students look crestfallen and feel that they did not learn. Then, when it comes time to evaluate the instructor, I receive lower ratings, with comments saying that I did not teach them properly enough to do well on exams.
When the mid-term exam is easier, I get more positive feedback. (This happens even if the two cohorts end up with a similarly inflated number of As.)
To solve this problem, Thaler tells his students that in his class, the top score is not 100 but 137. Now the average student gets around 90 points or so, and he reports that students feel happier even though, on a relative scale, they did no better than students in prior classes!
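Thaler's trick is pure rescaling: the relative performance doesn't change, only the number students see. A quick sketch of the arithmetic (the 137-point maximum is Thaler's; the specific scores below are invented for illustration):

```python
# Sketch of Thaler's rescaling: same relative performance, bigger-looking number.
# The 137-point maximum is from the op-ed; the scores are invented examples.

def as_percent(score, max_points):
    """Express a raw score as a percentage of the maximum."""
    return 100 * score / max_points

avg_on_137 = as_percent(90, 137)   # a "90" on Thaler's scale
avg_on_100 = as_percent(66, 100)   # roughly the same performance out of 100

print(round(avg_on_137, 1))  # 65.7 -- the 90 is only about two-thirds of the points
print(round(avg_on_100, 1))  # 66.0
```

In other words, a 90 out of 137 is a weaker exam than a 70 out of 100, yet it apparently feels better.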
***
I am considering trying this out next term, although I'm not sure it would work.
In the case of my students, the source of anxiety, I think, is self-doubt. Many of my exam questions are open-ended. I emphasize this early and often because real-life data problems do not have simple solutions that involve applying some textbook formula. I also tell students that there are many acceptable solutions to these questions.
For example, instead of "Use this formula to deal with the miscoded data in column X", I'd ask: "There is a problem in column X. Identify it and show me how you'd solve the issue."
I think the trouble is that the students (despite being in a statistics class) want the certainty of knowing that they got the points. They may not yet have developed conviction in their own solution strategies, and that uncertainty makes them anxious. They want an answer key, even after I emphasize that different, even contradictory, answers may be accepted (though not if they come from the same student).
***
Another challenge of implementing Thaler's idea is deciding what the maximum score should be. The average score of the class depends on who the students are and how hard the exam is, and I don't have the exams set before the syllabus is drawn up.
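If I did try it, the maximum could in principle be worked backward from a target: pick the number you want the average student to see, estimate what fraction of available points the class will earn, and solve for the maximum. A back-of-envelope sketch, where both inputs are hypothetical guesses, not anything from Thaler:

```python
# Hypothetical back-of-envelope: choose a maximum score so the class average
# "reads" as a chosen number. Both inputs are guesses made before the exam exists.

def max_score_for(target_average, expected_fraction):
    """Maximum points such that expected_fraction of them equals target_average."""
    return target_average / expected_fraction

# If the class typically earns ~65% of available points and I want the
# reported average to land around 90:
print(round(max_score_for(90, 0.65)))  # 138
```

The catch is exactly the problem above: the expected fraction isn't knowable until the students and the exam are known.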
***
A third point, which merits a separate post, is the discomfort with "manipulating emotions". Recall that some Facebook researchers were accused of doing exactly that for running a randomized experiment (link). Here, Thaler presents a method of manipulating emotions as a celebration of behavioral economics.
Should student self-worth that is tied to getting As and 90% scores (regardless of actual worthiness) be a paramount objective for educators? Right now, as Thaler points out, it is, because administrators use course evaluations as the sole source of information about an instructor's ability. This loops back to the previous post: metrics cause perverse behavior, and we need to collect small data, in this case classroom observations and interviews with instructors.
***
Happy Memorial Day weekend to those in the States.
I am in an online analytics program. What you described in the first part about the open-ended nature of answers absolutely rings true for my program and the intent of the educators for the students to learn outside the textbook.
Unfortunately, because of the online nature, we get mostly multiple choice and a lack of nuance. I would give a ton for a professor to evaluate and accept more than one "right" answer.
Posted by: Matty | 05/22/2015 at 11:21 AM
One thing I wish my Grad school instructors would do is to provide one sample test before the first test. It always seems to take me one test to figure out the testing style of the instructor and I end up having to crush that next series of tests to get the A.
I do like the irony of using a non 0-100 based scale in a stats class where people should be able to understand the distribution of the scores anyway :).
Posted by: John McClenny | 05/22/2015 at 11:35 AM
John: I do give out a practice exam before the exam. Interestingly, only about half the students study it, which prompted my advice column here.
Matty: It's nice that some students realize it takes a lot of time and effort to set open-ended questions, and to grade them (i.e., I have to read and understand your answers, rather than match keywords). Instructors probably avoid doing this because students show no appreciation for it.
Because the questions are open-ended, I don't provide answer keys to my practice exams, but I do offer feedback to anyone who emails me before the exam. This practice has led a few students to grade me F.
Posted by: Kaiser | 05/22/2015 at 01:47 PM