Andrew Sullivan notes that the Rasmussen poll has been discordant with other polls in recent months, and he includes graphs (from Pollster.com) to make the point. A good example of effective visualization.
Note what makes it work: identical vertical scales on both charts, identical time frames, and matched colors (red for disapproval, black for approval).
Reference: "Rasmussen vs. the rest", Andrew Sullivan blog, Dec 29 2009.
[Update - 1/5/2010: A few constructive comments, including a stern note from "A Professor", sent me scrambling to see if I have been too trusting of the experts. Thankfully, the original source of these charts, Pollster.com, provides interactive tools that can be used to test the suggestions.
It is true that the Gallup poll is a counterbalance to the Rasmussen, biased in the opposite direction. But looking at the evidence below, one still has to conclude that the divergence between the Rasmussen and all other polls is much more striking than that between the Gallup and all others. In particular: in the Rasmussen, the disapproval proportion has exceeded the approval proportion since August, while this pattern is still not clearly established in the aggregate of all other polls by December. (The crossover appears inevitable unless some new policy sways public opinion back toward Obama soon.)
On balance, I'd still consider Sullivan's point valid. Then again, I don't consider myself an expert on polls, so there could well be other anomalies hiding within the dozens of polls. I am a bit intrigued/disturbed by the fact that the Gallup apparently did not measure disapproval until August; or perhaps there was a glitch in the plotting software.
Of course, any poll or market-research interview can be manipulated: through sample selection, leading questions, the structure of the questionnaire, and so on. That's why Pollster-style charts showing the aggregate trends are crucial to look at.
Cherry-picking is to be frowned upon, but sometimes the cherry-picked item really is an outlier, and other times it is not. When an entire group is set aside, and the underlying dataset is large, as here, the risk of wrongly discarding good data is smaller, but vigilance is always warranted.
On this last point, I'm again grateful to our vigilant readers for pointing out the problems with the initial post.]