Outsourcing this one to Matthew Handy, who wrote an excellent piece on the Significance blog about how to think about rankings and league tables.
Matthew makes really good points. I'd add the following: a clear sign that this type of ranking is flawed is its inexplicably large year-to-year variability. Something like the quality of a school does not change abruptly. A school may climb the rankings slowly over decades, but when the rankings show a school bouncing up and down a few ranks every year, what we are watching is the imperfection of the ranking model, not evidence of any real change in quality. (Technically, we say these changes are not statistically significant.)
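A quick simulation makes the point concrete. The sketch below (my own illustration, not from Matthew's piece; the school count and noise level are arbitrary assumptions) gives each school a fixed "true quality" that never changes, adds some measurement noise each year, and ranks the schools. Even though nothing real changes, the ranks jump around from year to year:

```python
import random

random.seed(42)

N_SCHOOLS = 100   # hypothetical number of schools
N_YEARS = 5       # hypothetical number of annual rankings
NOISE = 0.5       # assumed measurement noise, relative to the spread of true quality

# Fixed "true quality" for each school -- it never changes across years.
true_quality = [random.gauss(0, 1) for _ in range(N_SCHOOLS)]

def yearly_ranks():
    # Observed score = true quality + noise; rank 1 = best score.
    scores = [q + random.gauss(0, NOISE) for q in true_quality]
    order = sorted(range(N_SCHOOLS), key=lambda i: -scores[i])
    ranks = [0] * N_SCHOOLS
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

ranks_by_year = [yearly_ranks() for _ in range(N_YEARS)]

# Average absolute year-to-year rank change per school,
# even though every school's true quality is constant.
moves = [
    abs(ranks_by_year[y][i] - ranks_by_year[y - 1][i])
    for y in range(1, N_YEARS)
    for i in range(N_SCHOOLS)
]
print(f"mean year-to-year rank change: {sum(moves) / len(moves):.1f}")
```

Every rank change the simulation prints is pure noise; a reader treating each move as news about school quality would be reading tea leaves.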
So the only sensible way to use these tables is to read them in (very) large chunks, say by quartile. Imagine if we all did that: the sponsors of such ranking studies would have to find a different line of work!