Maps and dots
Jan 04, 2008
Happy New Year
The cosmos of university rankings got more interesting recently with the advent of the "brain map" in Wired magazine. This new league table counts the total number of winners of five prestigious international prizes (Nobel, Fields, Lasker, Turing, Gairdner) over the past 20 years (up to 2007); almost all of the winners were affiliated with American institutions.
As discussed before, the map is a difficult graphical object; it acts like a controlling boss. In this brain map, the concentration of institutions in the North American land mass causes over-crowding, forcing the designer to insert guiding lines drawing our attention in myriad directions. These lines scatter the data asunder, interfering with the primary activity of comparing universities.
The chain-of-dots object cannot stand by itself without an implicit structure (e.g. rows of 10). This limitation was apparent in the hits-and-misses chart as well. Sticking fat fingers on paper to count dots is frustrating; simple bars allow readers to compare relative strength with less effort.
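To make the comparison concrete, here is a minimal sketch (Python with matplotlib, which the original chart of course does not use; the count is invented) drawing the same value once as a chain of dots in rows of 10 and once as a plain bar:

```python
# Minimal sketch: the same hypothetical count drawn as a dot chain
# (rows of 10) and as a plain bar. The count is made up for illustration.
import matplotlib.pyplot as plt

count = 23  # hypothetical number of prizes

fig, (ax_dots, ax_bar) = plt.subplots(2, 1, figsize=(5, 2.5))

# Dot chain: the reader must count dots, relying on the implicit rows of 10
xs = [i % 10 for i in range(count)]
ys = [-(i // 10) for i in range(count)]
ax_dots.scatter(xs, ys, s=40)
ax_dots.set_title("dot chain (count the dots)")
ax_dots.set_xlim(-0.5, 10.5)
ax_dots.axis("off")

# Bar: the reader judges length against a common axis
ax_bar.barh([0], [count])
ax_bar.set_title("bar (read the length)")
ax_bar.set_xlim(0, 30)
ax_bar.set_yticks([])

plt.tight_layout()
plt.show()
```

The dot panel needs the rows-of-10 convention to be readable at all; the bar carries the same number with nothing but an axis.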
In the junkart version, we ditched the map construct completely, retaining only the east-west axis. [For lack of space (and time), I omitted the US East Coast and Washington-St. Louis.] With this small multiples presentation, one can better contrast institutions.
To help comprehend the row structure, I inserted thin strikes to indicate zero awards. A limitation of the ranking method is also exposed: UC-SF has a strong medical school and, not surprisingly, has received a fair share of Nobel (medicine), Lasker and Gairdner prizes; zeroes in the Lasker and Gairdner columns at other institutions could be due to less competitive medical schools, or to having no medical school at all!
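The layout can be sketched roughly as follows (matplotlib assumed; the institutions and counts below are invented placeholders, not the Wired tallies): small multiples of horizontal bars, one panel per institution, one row per prize, with a short tick standing in for the "thin strike" at zero.

```python
# Sketch of the small-multiples idea: one panel per institution, one row per
# prize, a short tick ("thin strike") marking zero awards. Data are invented.
import matplotlib.pyplot as plt

prizes = ["Nobel", "Fields", "Lasker", "Turing", "Gairdner"]
schools = {
    "Stanford": [15, 1, 4, 3, 2],
    "UC-SF":    [4, 0, 5, 0, 6],
    "Chicago":  [9, 2, 1, 0, 0],
}

fig, axes = plt.subplots(1, len(schools), figsize=(8, 2.5), sharex=True)
for i, (ax, (name, counts)) in enumerate(zip(axes, schools.items())):
    ys = list(range(len(prizes)))
    ax.barh(ys, counts)
    # thin strike: a short tick so rows with zero awards stay visible
    for y, c in zip(ys, counts):
        if c == 0:
            ax.plot([0, 0.2], [y, y], linewidth=2, color="grey")
    ax.set_yticks(ys)
    ax.set_yticklabels(prizes if i == 0 else [])
    ax.set_title(name)
plt.tight_layout()
plt.show()
```

Because the panels share a horizontal scale, comparing one prize category across institutions only requires scanning the same row from panel to panel.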
Reference: "Mapping Who's Winning the Most Prestigious Prizes in Science and Technology", Wired magazine, Nov 2007.
"Sticking fat fingers on paper to count dots is frustrating. Simple bars allow readers to compare relative strength with less effort."
That's not a fair like-for-like comparison of the two objects. "How hard is it to compare relative strength of dot chains?" or "how hard is it to count integer bar length?" should be the test. The answers respectively are "no harder than reading a bar" and "nearly impossible".
The dot chain can be made arbitrarily close in design to a bar; if you don't like the roundness of a dot, then a square or a 2x1 rectangle is an attractive alternative. This makes a slighting comparison of the two objects pointless.
The only regime in which a dot chain really does not make any sense is where the quantity displayed is in real numbers instead of integers, or when the integers are so large they may as well be.
Posted by: derek | Jan 05, 2008 at 04:38 AM
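To illustrate derek's point, here is a minimal sketch (hypothetical count, matplotlib assumed) drawing the same integer as a chain of square markers and as a plain bar; the two read almost identically.

```python
# Sketch of derek's point: a chain of square markers is visually close to a
# segmented bar. The count is hypothetical.
import matplotlib.pyplot as plt

count = 12  # hypothetical award count

fig, (ax_chain, ax_bar) = plt.subplots(2, 1, figsize=(5, 1.8))

# the "dot" chain, drawn with square markers instead of circles
ax_chain.scatter(range(count), [0] * count, marker="s", s=120)
ax_chain.set_xlim(-1, 15)
ax_chain.axis("off")

# a plain bar of the same value, for comparison
ax_bar.barh([0], [count], height=0.5)
ax_bar.set_xlim(-1, 15)
ax_bar.axis("off")

plt.tight_layout()
plt.show()
```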
I realize that you are highlighting the organization of the data and not commenting on the data themselves, but I have to point out that this chart seems to be somewhat incomplete. The University of Chicago, with its 50+ Nobel affiliates, is not listed! Being an alum, I looked for it first.
Posted by: Go Maroons! | Jan 05, 2008 at 01:00 PM
In the junkart version, because of the orientation of the bars, I can't easily compare across the institutions, which seems to be the main point. Legend difficulties apart, it might work better if the bars were vertical.
I'm not sure what the data is meant to be telling me in any case: these institutions are of different sizes, have different specialities and have different budgets. Surely you'd need to equalize for these factors before comparing them? Institution X may have no Nobel prizes because it didn't win one, or because it didn't do any work that would qualify, i.e. it's not its specialism. Without knowing the difference between zero awards and "not entered", the data is difficult to draw conclusions from.
Posted by: RichardJ | Jan 07, 2008 at 04:15 AM
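RichardJ's "equalize" suggestion amounts to dividing each raw count by some measure of size before comparing; a trivial sketch with invented numbers:

```python
# Sketch of RichardJ's point: adjust raw prize counts for institution size
# before comparing. Faculty sizes and counts here are invented.
faculty_size = {"Stanford": 2300, "UC-SF": 2800, "Caltech": 900}
prize_count  = {"Stanford": 25,   "UC-SF": 15,   "Caltech": 18}

per_thousand = {name: 1000 * prize_count[name] / faculty_size[name]
                for name in prize_count}
for name, rate in sorted(per_thousand.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.1f} prizes per 1,000 faculty")
```

A small institution can easily top such an adjusted ranking while sitting near the bottom of the raw one, which is exactly the ambiguity the comment raises.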
"I'm not sure what the data is meant to be telling me in any case"
We can try to get a message from the text in Wired:
The state of science education may be declining in the US, but Americans still win most of the world's top science prizes. In the past two decades, [...] US institutions have received at least three of the major awards [...] compared with just six non-US entities. And of the 128 individual accolades given to US faculty, only a quarter were presented to noncitizens. But because the number of science and engineering PhDs handed out in the US has stagnated over the past decade, it doesn't take a Nobel-winning statistician to predict that this map will look very different by 2027.
So the takeaway messages seem to be: go USA!, most prizes going to US citizens, and some words about change. A presentation of the data that supported these messages would have aggregated all the US prizes; the breakdown by institution seems to be excessive detail that doesn't add context. It would also present the citizen/non-citizen data (this is present in the map as open circles). Finally, the breakdown that would have been useful, instead of by institution, is a time series, since the article talks about change but does not back up any of its claims in that area (even the remark about stagnant PhD numbers is vague and unquantified).
Posted by: derek | Jan 07, 2008 at 05:13 AM
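A presentation along the lines derek suggests might start from an aggregation like this sketch (all numbers invented, matplotlib assumed): US versus non-US totals on one side, a per-year series on the other.

```python
# Sketch of the aggregation derek suggests: US vs non-US totals and a yearly
# series, instead of a per-institution breakdown. All numbers are invented.
import matplotlib.pyplot as plt

years = list(range(1988, 2008))
us_awards = [5, 6, 4, 7, 6, 5, 8, 6, 7, 5, 6, 7, 8, 6, 5, 7, 6, 8, 7, 9]  # invented
non_us    = [2, 1, 3, 1, 2, 2, 1, 2, 1, 3, 2, 1, 1, 2, 3, 1, 2, 1, 2, 1]  # invented

fig, (ax_tot, ax_ts) = plt.subplots(1, 2, figsize=(8, 2.5))

# aggregate totals: the "go USA" message in one comparison
ax_tot.barh(["non-US", "US"], [sum(non_us), sum(us_awards)])
ax_tot.set_title("awards, 1988-2007")

# time series: the breakdown that would actually speak to change
ax_ts.plot(years, us_awards, label="US")
ax_ts.plot(years, non_us, label="non-US")
ax_ts.set_title("awards per year")
ax_ts.legend()

plt.tight_layout()
plt.show()
```

The point of the sketch is only that the aggregation and the time axis, not the institutional detail, are what would carry the article's claims about dominance and change.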