In the last post, I linked to the Philadelphia Inquirer article disclosing how admissions staff at Temple University gamed the rankings. The article mentioned specific techniques. Here is an annotated version, with my comments:
Inflating GMAT scores
The most interesting technique is converting GRE scores to GMAT scores before computing the reported average GMAT score. Presumably, applicants who take the GRE are stronger students because they are also applying to MS and PhD programs, which require the GRE and do not accept the GMAT as a substitute. Folding their converted scores into the pool therefore inflates the average GMAT score, which under the reporting rules should include only applicants who submitted GMAT scores.
Is converting GRE to GMAT a bad thing? Not necessarily! The average GMAT score represents only the part of the applicant pool that submitted GMAT scores. As discussed above, this group is likely to be academically weaker than the group that submitted GRE scores. (Chances are the average GRE score is reported separately.)
What causes trouble is the proportion of applicants submitting GRE versus GMAT scores. If a school has lots of applicants who are also applying to MS/PhD programs and who submit GRE scores instead, then the average GMAT score will likely be dragged down.
I actually think having a standard conversion formula between the two tests is great. All schools can then be evaluated on one average test score that takes into account the mixture of GREs and GMATs. The question is: is there a standard conversion formula?
Yes, the ETS provides one. The formula is based on the subset of people who have taken both tests; one test score is then used to predict the other. Here is a PDF that explains the methodology. (By using this formula, we assume that this subset of test-takers generalizes to those who took the GRE, did not take the GMAT, and are applying to business schools.)
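To see the mechanics, here is a little sketch in Python. All the numbers are invented, and the conversion function is a made-up stand-in for the real ETS formula; the point is only how pooling converted GRE scores raises the reported average when the GRE takers are stronger.

```python
# Sketch: folding converted GRE scores into the GMAT pool raises the average
# when the GRE takers are the stronger group. The conversion below is a
# made-up linear stand-in; the actual ETS formula is regression-based.

def gre_to_gmat(gre_total):
    """Hypothetical placeholder conversion, NOT the ETS formula."""
    return 200 + 7.5 * (gre_total - 260)

gmat_scores = [540, 580, 610, 630]   # applicants who submitted GMAT scores
gre_scores = [320, 325, 330]         # presumably stronger GRE-only applicants

gmat_only_avg = sum(gmat_scores) / len(gmat_scores)
pooled = gmat_scores + [gre_to_gmat(g) for g in gre_scores]
pooled_avg = sum(pooled) / len(pooled)

print(f"GMAT-only average: {gmat_only_avg:.0f}")  # 590
print(f"Pooled average:    {pooled_avg:.0f}")     # 632 -- inflated
```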
Under-reporting student debt
The reported average student debt was diluted by including students without debt. The rating body has requested that the metric be computed only for students with debt. Mixing in zeroes brings down the average.
There are two different averages to consider: the average debt held by students who have debt, and the average debt across all students. To evaluate the school's financial aid policies, the average across all students gives the better answer. If you are a prospective student who will be taking out a loan, the average among students with debt is a better measure of how much you may need to borrow.
The link between the two metrics is the proportion of students who have debt. If the school's policy is to "spread the wealth", then the average debt load will be lower but a higher proportion of students will have debt.
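Here is a quick sketch with made-up numbers; notice that the all-students average is just the borrowers' average scaled down by the proportion of students with debt.

```python
# Two averages of student debt, using made-up numbers (0 = no debt).
debts = [30_000, 25_000, 40_000, 0, 0, 35_000, 0, 20_000]

borrowers = [d for d in debts if d > 0]

avg_all = sum(debts) / len(debts)                 # diluted by the zeroes
avg_borrowers = sum(borrowers) / len(borrowers)   # what the rating body asks for
share_with_debt = len(borrowers) / len(debts)

print(f"Average over all students: {avg_all:,.0f}")        # 18,750
print(f"Average among borrowers:   {avg_borrowers:,.0f}")  # 30,000

# The two metrics are linked by the proportion of students with debt:
assert abs(avg_all - share_with_debt * avg_borrowers) < 1e-9
```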
Rounding up GPAs
It appears that they crudely rounded up the average GPA; the example was rounding 3.22 to 3.30. The 0.08 increase looks innocent, but applied to the average, it is equivalent to adding 0.08 to every student's GPA. (To be more accurate: since GPAs are capped at 4.0, for each student whose GPA is 3.92 or higher, someone else's GPA must be inflated by more than 0.08.)
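In case the equivalence isn't obvious, here is the arithmetic for a hypothetical class of 500:

```python
# Rounding the reported average GPA from 3.22 up to 3.30 injects 0.08
# phantom GPA points per student. For a hypothetical class of 500:
n_students = 500
phantom_points = (3.30 - 3.22) * n_students
print(f"{phantom_points:.0f} GPA points that no student earned")  # 40

# And since GPAs cap at 4.0, a student already at 3.95 can absorb at most
# 0.05 of the bump, so someone else is implicitly inflated by more than 0.08.
```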
Under-reporting the number of admission offers
Selectivity is measured by the acceptance rate: the number of offers divided by the number of applicants. Inflating the number of applicants or deflating the number of offers lowers this rate, making the school appear more selective. According to the investigation, they blatantly lied by under-counting the number of offers. In the previous post, I described a number of subtler techniques: you generate more applicants, but of the kind to whom you are unlikely to give offers.
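A sketch of both the blatant and the subtle versions, with invented counts:

```python
# Acceptance rate = offers / applicants; a lower rate looks more selective.
applicants = 1_000
true_offers = 400
reported_offers = 300   # blatant: under-count the offers

print(f"True acceptance rate:     {true_offers / applicants:.0%}")      # 40%
print(f"Reported acceptance rate: {reported_offers / applicants:.0%}")  # 30%

# Subtler: recruit extra applicants you will never admit.
padded_applicants = 1_300
print(f"Padded acceptance rate:   {true_offers / padded_applicants:.0%}")  # 31%
```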
For even more tricks, read Chapter 1 of my book Numbersense.