The following two charts plot the same data: yearly rainfall in Los Angeles over roughly the last two decades. (The original chart, on the left, came from the LA Times.) Why do they give such different impressions?
The left chart looks busy despite the simplicity of the data, because all 21 values are printed on the chart itself, each to two decimal places. Once every value is printed, the axis labels add no information, and it is highly unlikely that any newspaper reader needs such precise rainfall measurements.
Chances are the reader wants to know how the general trend of rainfall in recent years compares with the historical pattern. Credit the designer for pulling the relevant data, including the average, maximum, and minimum rainfall on record. On the right chart, all three historical numbers are incorporated into the axis so that they can act as reference levels.
Note also that the axes were switched to restore the usual placement of time on the horizontal axis.
The bar chart emphasizes the absolute value of each year's rainfall, while the dot plot displays the difference between each measurement and the historical average. On the right chart, it is easy to see whether any year's rainfall came in above or below expectation. Over the last two decades, there appear to be about as many years above the average as below it, and the overages and underages do not cluster in time.
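A redesign along these lines is easy to sketch. The snippet below uses matplotlib with made-up rainfall numbers and reference levels (none of these figures come from the LA Times chart): a dot plot with time on the horizontal axis and the historical average, maximum, and minimum drawn as reference lines.

```python
# A minimal sketch of the redesigned chart. All numbers here are
# invented for illustration; they are NOT the actual LA Times data.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

years = list(range(2001, 2011))
rainfall = [9.2, 37.3, 13.2, 3.2, 13.5, 9.1, 16.4, 20.2, 8.7, 5.9]

# Assumed historical reference levels (illustrative).
avg, lo, hi = 14.9, 3.2, 38.2

fig, ax = plt.subplots()
ax.plot(years, rainfall, "o")  # dot plot, time on the horizontal axis
for level, label in [(avg, "average"), (lo, "minimum"), (hi, "maximum")]:
    ax.axhline(level, linewidth=0.5)       # reference level across the chart
    ax.annotate(label, (years[-1], level))  # label it at the right edge
ax.set_ylabel("Rainfall")
fig.savefig("rainfall_dotplot.png")
```

Putting the three historical numbers on the chart as lines, rather than as printed data labels, is what lets the reader judge each year against them at a glance.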
From a Trifecta checkup perspective, the choice of data is not attuned to the purpose of the chart. The right data has been collected; a small transformation would have made all the difference. The choice of chart type also fails to serve the chart's purpose.
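The "small transformation" amounts to subtracting the historical average from each year's total. A minimal Python sketch, again with illustrative numbers rather than the actual data:

```python
# Hypothetical yearly rainfall totals (illustrative, not the LA Times data).
rainfall = {
    2001: 9.2, 2002: 37.3, 2003: 13.2, 2004: 3.2, 2005: 13.5,
    2006: 9.1, 2007: 16.4, 2008: 20.2, 2009: 8.7, 2010: 5.9,
}
HISTORICAL_AVERAGE = 14.9  # assumed long-run average, also illustrative

# Deviation of each year from the historical average: the quantity the
# dot plot should display, rather than the raw totals.
deviations = {year: total - HISTORICAL_AVERAGE for year, total in rainfall.items()}

above = sum(1 for d in deviations.values() if d > 0)
below = sum(1 for d in deviations.values() if d < 0)
print(f"{above} years above the average, {below} below")
```

Plotting the deviations directly answers the reader's question, above or below the norm, without asking them to do the subtraction in their head.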