In the business analytics universe, the discipline of "business intelligence" is often frowned upon. Business intelligence primarily involves generating reports on business metrics, tracking them over time, and producing ad-hoc analyses that explain the trends.
People often complain that such work is not challenging and not sexy. There is a stigma that BI work is data dumping. In reality, good BI work is rare and extremely valuable. Horrible BI work is commonplace and frequently leads to bad decisions.
Design thinking is very important to good BI work. Analytics reports should be designed so that they facilitate managers making the right decisions. When a report is designed poorly, it causes bad decisions.
***
I just started teaching a new course at Columbia called Applied Analytics Methods and Frameworks. After students took Quiz #1 (worth 5 points out of 100 in the grading scheme), an unusual number of worried students told me they would fail the course. I have taught for over 10 years, and have never encountered anything like this.
Eventually, thanks to one persistent student, I found the root cause of the anxiety. It is the terrible design of the "Gradebook" report on our online course website.
The Gradebook is just a spreadsheet with one column for each graded component of the course, plus a column at the end labeled "Total". Because this is an online website, everything must be in "real time". So after the first assignment (Quiz #1), the "Total" column shows the percentage score on Quiz #1, and assigns a letter grade to that score, as if it were your projected grade for the course.
Since Quiz #1 is worth 5 points, if a student scored 3/5, the "Total" shows up as 60%, and a letter grade of D is printed next to it.
Students see D and even F in that column, and are immediately demotivated from the first week of the course! Talk about bad design causing emotional harm.
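The flawed "running total" logic can be sketched in a few lines. This is a hypothetical reconstruction of what the gradebook appears to do, not Canvas's actual code; the letter-grade cutoffs are assumed for illustration:

```python
# Hypothetical sketch of the flawed "real time" Total logic: the projected
# grade is the percentage earned on *graded work only*, so early in the
# term one small quiz masquerades as the entire course grade.

def projected_total(graded):
    """graded: list of (points_earned, points_possible) for graded items."""
    earned = sum(e for e, _ in graded)
    possible = sum(p for _, p in graded)
    return 100 * earned / possible

def letter(pct):
    # Simplified letter-grade cutoffs, assumed for illustration
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

# After Quiz #1 only: a student who scored 3 out of 5 points
pct = projected_total([(3, 5)])
print(pct, letter(pct))  # 60.0 D
```

A 5-point quiz thus carries 100% of the weight in the "projection" until more work is graded.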
Upon learning this, I sent the following note to my students.
***
Subject: Gradebook Idiocy: An example of terrible analytics report design
It has come to my attention that the way Canvas's gradebook presents the "Total" grade is idiotic, and is causing unnecessary consternation.
You should ignore the letter grade that is being printed on the Grade report. This "real time" grade has no meaning.
For example, if you scored 80% on Quiz #1, it assumes that you will score 80% on all of your future assignments, and it "projects" that you have a B-.
This "projection" is terrible on several levels:
- each Quiz is different and your percentage score will vary
- the skills needed to do well on Quizzes are very different from skills needed to do well on Projects and Presentations, and therefore, your performance on one Quiz does not give much information about your performance on other assignments
- Projects and Presentations together account for 70% of the total grade. Taking the first quiz and projecting its percentage score as your "Total grade" is equivalent to giving 100% weight to Quiz #1 and 0% weight to everything else.
Since the software does not allow me to suppress this useless column, I'm asking you to ignore it.
Here is how you should think about your total grade: the Project assignments are 60% of your grade, and therefore, if you do well on them, you will almost surely do well in the class. You will do well on Class Participation by attending the live sessions. The Presentation assignment tests a completely different skill set from the other assignments. The Quizzes account for a maximum of 20 out of 100 points in the course. Even if you score only 50% on your quizzes, you would lose 10 points out of 100. If you do well on the other components, you can still score up to 90 points out of 100.
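The bound in the note above is easy to verify with a weighted sum. The note fixes only Projects at 60 points and Quizzes at 20; the split of the remaining 20 points between Participation and Presentation is assumed here for illustration:

```python
# Sketch of the weighted-grade reasoning. Only Projects (60) and
# Quizzes (20) are fixed by the note; the 10/10 split of the remaining
# points between Presentation and Participation is an assumption.

weights = {"Projects": 60, "Presentation": 10, "Participation": 10, "Quizzes": 20}

def course_total(pct_scores):
    """pct_scores: fraction earned (0..1) on each component."""
    return sum(weights[c] * pct_scores[c] for c in weights)

# Score only 50% on quizzes but full marks elsewhere:
best_possible = course_total(
    {"Projects": 1.0, "Presentation": 1.0, "Participation": 1.0, "Quizzes": 0.5}
)
print(best_possible)  # 90.0
```

Losing half the quiz points costs only 10 of 100 points, exactly as the note says.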
This Gradebook is an example of terrible design of an analytics report. After taking this course, you should know how to spot these issues!
Interesting. Is the report idiotic only because it is oversimplified, or because no model/presentation would be helpful?
That is, if you had historical data about the course and found evidence that students who did poorly on quiz 1 were much more likely to do poorly in the course (in effect, correlation among not only future quizzes but also projects) then couldn't you present, say, "for students who scored 2/5 on the first quiz, here's the distribution of final grades." If 90% of those final grades is failing, is that a useful model? (Maybe the student should drop the class???)
Alternately, do you know from prior experience that quiz performance and project performance are completely uncorrelated? Admittedly, there's not a lot of data in a 5 question quiz, but so little that there's no cause for concern if you score a 1/5 vs. scoring a 5/5?
And, of course, if the class is 90% projects... why have the quizzes at all? Doing well or doing poorly on them doesn't seem like it'd have much bearing on the outcome either way. Does there exist a student who does just poorly enough on projects that gaining the 10% of the grade from quizzes (again, assuming it's plausible that a student who does poorly on the projects can somehow do well on quizzes or vice versa) makes a difference?
Posted by: Adam Schwartz | 06/01/2016 at 03:30 PM
Adam one reason for having quizzes is that they test specific knowledge which may not be evident in projects. They may also provide important feedback before they start the projects.
Posted by: Ken | 06/01/2016 at 06:37 PM
I think you missed an opportunity for the next lesson. I'd have sent an email saying the grade report is misleading and asking them to think: why might the grade report be bad?
Then spend 5 minutes asking for their thoughts, and give them arbitrary points for demonstrating thinking rather than requiring the right answers, of course...
Kevin
Posted by: Kevin | 06/02/2016 at 08:39 AM
@Ken, absolutely, I can understand how it'd be useful to the instructor to test specific knowledge to understand if the instructor needs to revise or re-cover the material, but that doesn't necessitate that the student receive a grade. If the feedback is informational for the instructor, why share it with the students in the form of a grade?
There certainly isn't enough impact from the quizzes on their actual course grade to matter much. I'd argue that either the grade is predictive at some level of future performance OR grading the quiz itself is sort of useless.
Posted by: Adam Schwartz | 06/02/2016 at 09:53 AM
I have this issue in my classes all of the time. I have therefore suppressed the total column in our online system by entering a formula there (=0). Students have used these online course systems for years, and therefore have never learned how to calculate their own grades.
I have my students create a grade calculator in excel as one of their homework assignments. They can then type in grades as they receive them and have a better understanding of what individual assignment grades mean.
So even though it is designed poorly -- I think the issue is not just that. It is a lack of understanding of some basic math skills.
Posted by: Katie | 06/12/2016 at 11:25 AM
A good example, and very inspiring.
However, I have a little question about this observation. You have written that this is a new course, and that after Quiz #1 you found the students anxious. Is it possible that this kind of anxiety is only temporary and will not influence the students' final scores?
One little naive opinion. :)
Xiuyang
Posted by: Xiuyang | 06/15/2016 at 10:09 AM
We don't use Canvas at the University of Cincinnati, but I've had this issue in Blackboard and resolved it. According to the documentation for Canvas, the total grade should not calculate for grades not yet completed.
http://elms.umd.edu/sites/elms.umd.edu/files//webfiles/documents/doc/Gradebook.pdf
See pages 6-7. "Total" seems to be different from "Final". Is it possible it's the wrong column, or that a setting is wrong?
Posted by: Jeffrey Shaffer | 08/22/2016 at 04:23 PM