The New York Times wrote about how the "Big Data" industry is trying to transform education (link). This is amusing and creepy by turns.
All of these efforts may be well-intentioned, but what strikes me is how unscientific the arguments given in favor of these data-driven methods are. You'd expect the same data-driven approach to be used to justify the new solutions, but you find almost none of that.
***
For example:
Arizona State’s initial results look promising. Of the more than 2,000 students who took the Knewton-based remedial course this past year, 75 percent completed it, up from an average of 64 percent in recent years.
What do they mean by "completing" the course? Is completion the same as competence? How do we know that the students were comparable from year to year? How variable are completion rates from year to year? Were there any changes in admission rules, or in the criteria for completing the course? Were there any changes in the content of the course?
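Even taking the reported numbers at face value, it's worth separating sampling noise from the comparability questions above. Here is a minimal sketch of a two-proportion z-test, assuming roughly 2,000 students in each cohort (the article gives that count only for the Knewton year; the historical cohort size is my assumption):

```python
from math import sqrt

# Two-proportion z-test on the reported completion rates.
# n_new comes from the article (~2,000 students in the Knewton course);
# n_old is an assumption -- the article gives no historical cohort sizes.
n_new, p_new = 2000, 0.75   # Knewton-based remedial course
n_old, p_old = 2000, 0.64   # recent-years average (cohort size assumed)

# Pooled proportion under the null hypothesis of no real difference
p_pool = (p_new * n_new + p_old * n_old) / (n_new + n_old)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_old))
z = (p_new - p_old) / se
print(f"z = {z:.1f}")  # about 7.6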
Where is the control group? Andrew Gelman has written a number of times about experimentation in education. It would seem that companies like Knewton should take the lead in this type of evidence-gathering.
***
Elsewhere:
Mr. Lange and his colleagues had found that by the eighth day of class they could predict, with 70 percent accuracy, whether a student would score a “C” or better.
I don't know what the distribution of grades is at this school (Rio Salado), but grade inflation in US colleges has generally pushed most if not all grades to "C" or better, so I'd consider 70 percent accuracy in predicting "C" or above to be poor. Also, the issue is not whether one can diagnose the cases but whether there is an intervention that would improve the underperforming students' grades. That depends on the reason for underperforming; in some cases, students simply deserve a "C" or worse.
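To make that baseline concrete: a "model" that predicts "C or better" for every single student is exactly as accurate as the overall pass rate. A sketch with an invented grade distribution (the article doesn't report Rio Salado's actual one):

```python
# Majority-class baseline: predict "C or better" for every student.
# The grade shares below are hypothetical -- the article does not
# report Rio Salado's actual grade distribution.
grade_share = {"A": 0.30, "B": 0.25, "C": 0.15, "D": 0.15, "F": 0.15}

base_rate = grade_share["A"] + grade_share["B"] + grade_share["C"]
print(f"baseline accuracy: {base_rate:.0%}")  # 70%, with no model at all
```

So if roughly 70 percent of students end up at "C" or better, the trivial predictor already hits 70 percent accuracy on day one, let alone day eight.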
Reading the article, I feel that much deeper thinking is needed on why we would want to change in these ways.
***
Change is not always good. I have been teaching a course at NYU for many years. About two or three years ago, the course evaluation form went online. It used to be that I'd spend 15 minutes of the last class handing out evaluation forms, leave the classroom, and designate a student to collect the forms and drop them in the mail. Now, students are reminded by email towards the end of the semester to fill out an online survey.
Not surprisingly, the number of students responding has plunged. It was almost 100% when the form was filled in during class; now it's rarely above 30%. To encourage higher response rates, the emails that go out to students (and faculty) have become more frequent, and they start earlier and earlier in the semester. The first email that opens the survey window is now sent not long after the midpoint of the course. As a result, students may be commenting on the class having experienced only half to two-thirds of it.
The nature of responses has also changed. I now see mostly extreme opinions. The people who care to write evaluations either love you or hate you. (The irony is that all students think they deserve an A, a standard they don't apply when evaluating professors.) Students who are in the middle don't bother to give feedback.
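A toy simulation makes this selection effect visible: if only students with strong opinions bother to respond, the observed ratings stop resembling the class. Every number below — the scale, the true distribution, the response probabilities — is invented for illustration:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical class of 100 students rating the course on a 1-5 scale;
# the true opinions are centered on the middle. All numbers here are
# invented for illustration.
true_ratings = random.choices([1, 2, 3, 4, 5],
                              weights=[5, 20, 50, 20, 5], k=100)

# Assumed response behavior: strong opinions respond far more often
respond_prob = {1: 0.8, 2: 0.3, 3: 0.1, 4: 0.3, 5: 0.8}
responses = [r for r in true_ratings if random.random() < respond_prob[r]]

print("true distribution:    ", sorted(Counter(true_ratings).items()))
print("observed distribution:", sorted(Counter(responses).items()))
print(f"response rate: {len(responses) / len(true_ratings):.0%}")
```

Under these made-up numbers, only about a quarter of the class responds, and the middle opinions all but vanish from the data even though they dominate the true distribution.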
It is absolutely true that putting the form online is more efficient, saves class time, and creates a data source for future data mining, but the quality of the data has drastically declined.
***
These all go back to the issue of measuring intangible things. It's very difficult to do right. See my related post here.
I see the same with student evaluations in a distance-only course I teach. Sometimes excellent students will comment, and then they may say the course is too easy. Poor students will usually take issue with anything they can. Average students usually don't comment.
As to your second example: if 70% of students score C or above, then predicting that all of the students score C or above will give 70% prediction accuracy.
Posted by: Ken | 07/20/2012 at 03:28 AM