Reader Bill anticipated my next post, which is to use small multiples to explain the challenges of using relative scales. Zbicyclist's point 1 is absolutely correct in the sense that the "model" uses the US$ as a standard and only establishes "relative values". That is true, but it does not address the question I posed, which is: under this "model", a researcher cannot conclude that the US$ is over- or under-valued; in other words, it must always have the correct valuation. Now, just as in other economic models, the theorist does not state explicitly the assumption that the US$ cannot be over- or under-valued. It is just that the assumption of the US$ as the standard necessarily leads to the result that it is always correctly valued.
If we choose another currency, say the Euro, as the standard, then under that theory, the US$ can have over- or under-valuation. However, now the Euro has been assumed to have the correct valuation.
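To make the re-basing concrete, here is a minimal sketch of the arithmetic involved (the currency figures are made up for illustration; they are not from any actual data). If currency c is worth (1 + v_c) times its "fair value" against the US$, then against a new standard s it is worth (1 + v_c) / (1 + v_s) times fair value:

```python
# Hypothetical valuations relative to the US$ (Big Mac-style:
# price ratio / exchange rate - 1). Numbers are invented for illustration.
valuations_vs_usd = {"USD": 0.0, "EUR": 0.25, "JPY": -0.30, "GBP": 0.10}

def rebase(valuations, new_standard):
    """Re-express valuations with a different currency as the standard.

    If currency c is (1 + v_c) times fair value against the old standard,
    then against the new standard s it is (1 + v_c) / (1 + v_s) times fair value.
    """
    base = 1 + valuations[new_standard]
    return {c: (1 + v) / base - 1 for c, v in valuations.items()}

vs_eur = rebase(valuations_vs_usd, "EUR")
# By construction the new standard is correctly valued (vs_eur["EUR"] == 0),
# and the US$ now appears under-valued (vs_eur["USD"] == 1/1.25 - 1 == -0.2).
```

The assumption that the standard is correctly valued falls out of the arithmetic: re-basing to any currency forces that currency's valuation to zero.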
The following charts show over (black) and under (white) valuation with different currencies as the standard:
This is a pretty complicated issue. The problem is that there is no external metric with which to measure value.
Note that I have revised my recommendation on what scale to use for the value axis based on zbicyclist's comments. Since this chosen scale cannot attain values below -1, we should treat -1 as the minimum value and use it as the left edge. (By the way, my scale multiplied by 100 is the Economist's scale.)
Also food for thought: should such a strange scale, allowing values between -1 and +infinity, be used? Percentage scales often have the property that a 20% increase and a 20% decrease differ not merely in direction but also in magnitude. Yet readers naturally assume that a 20% increase and a 20% decrease differ only in direction, so such scales can mislead. What's better?
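The asymmetry can be seen with a quick sketch (my own illustration, not from the original discussion): on the (actual/predicted - 1) scale, moving from 1.0 to 1.2 reads as +20%, but the reverse move from 1.2 back to 1.0 reads as about -16.7%, not -20%. A log scale, by contrast, treats the two moves as exact mirror images:

```python
import math

def pct_valuation(actual, predicted):
    # The percentage-style scale discussed above: ranges over (-1, +infinity).
    return actual / predicted - 1

def log_valuation(actual, predicted):
    # Log alternative: symmetric, ranges over the whole real line.
    return math.log(actual / predicted)

# The same multiplicative gap gives different magnitudes on the percentage scale:
up = pct_valuation(1.2, 1.0)    # 0.2
down = pct_valuation(1.0, 1.2)  # about -0.1667, not -0.2

# On the log scale the two moves are exact mirror images:
log_up = log_valuation(1.2, 1.0)
log_down = log_valuation(1.0, 1.2)  # equals -log_up
```

This is one argument for a log scale in charts like these, at the cost of a less familiar axis.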
Reference: "Playthings in the unreal world", Junk Charts.