On the interpretability of log-scaled charts

A previous post featured the following chart showing stock returns over time:

Gelman_overnightreturns_tsla

Unbeknownst to readers, the chart plots one thing but labels it something else.

The designer of the chart explains how to read the chart in a separate note, which I included in my previous post (link). It's a crucial piece of information. Before reading his explanation, I didn't realize the sleight of hand: he made a chart with one time series, then substituted the y-axis labels with another set of values.

As I explored this design choice further, I realized that it has been widely adopted in a common chart form, without fanfare. I'll get to it in due course.

***

Let's start our journey with as simple a chart as possible. Here is a line chart showing constant growth in the revenues of a small business:

Junkcharts_dollarchart_origvalues

For all the charts in this post, the horizontal axis depicts time (x = 0, 1, 2, ...). To simplify further, I describe discrete time steps although nothing changes if time is treated as continuous.

The vertical scale is in dollars, the original units. It's conventional to modify the scale to units of thousands of dollars, like this:

Junkcharts_dollarchart_thousands

No controversy arises if we treat these two charts as identical. Here I put them onto the same plot, using dual axes, emphasizing the one-to-one correspondence between the two scales.

Junkcharts_dollarchart_dualaxes

We can do the same thing for two time series that are linearly related. The following chart shows constant growth in temperature using both Celsius and Fahrenheit scales:

Junkcharts_tempchart_dualaxes

Here is the chart displaying only the Fahrenheit axis:

Junkcharts_tempchart_fahrenheit

This chart admits two interpretations: (A) it is a chart constructed using F values directly and (B) it is a chart created using C values, after which the axis labels were replaced by F values. Interpretation B implements the sleight of hand of the log-returns plot. The issue I'm wrestling with in this post is the utility of interpretation B.

Before we move to our next stop, let's stipulate that if we are exposed to that Fahrenheit-scaled chart, either interpretation can apply; readers can't tell them apart.
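To see why the two interpretations are indistinguishable, here is a minimal sketch (Python/matplotlib, with made-up temperature values, not from any of the charts above) that builds the chart both ways:

```python
# A minimal sketch (matplotlib, made-up temperatures) of the two readings of the chart.
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(6)                         # time steps 0..5
celsius = 10.0 * t                       # constant growth of 10 C per step (made up)
fahrenheit = celsius * 9 / 5 + 32        # exact linear conversion

fig, (ax_a, ax_b) = plt.subplots(1, 2, figsize=(8, 3))

# Interpretation A: plot the Fahrenheit values directly.
ax_a.plot(t, fahrenheit)
ax_a.set_title("A: plotted in F")

# Interpretation B: plot the Celsius values, then swap in Fahrenheit labels.
ax_b.plot(t, celsius)
c_ticks = np.arange(0, 60, 10)
ax_b.set_yticks(c_ticks)
ax_b.set_yticklabels([f"{c * 9 / 5 + 32:.0f}" for c in c_ticks])
ax_b.set_title("B: plotted in C, labeled in F")

plt.show()
```

Because the conversion is linear, the line in panel B has the same shape and the relabeled ticks are spaced consistently with a true Fahrenheit axis, so readers can't tell the two constructions apart.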

***

Next, we look at the following line chart:

Junkcharts_trendchart_y

Notice the vertical axis uses a log10 scale. We know it's a log scale because the equally-spaced tickmarks represent different jumps in value: the first jump is from 1 to 10, the next jump is from 10, not to 20, but to 100.

Just like before, I make a dual-axes version of the chart, putting the log Y values on the left axis, and the original Y values on the right axis.

Junkcharts_trendchart_dualaxes

By convention, we often print the original values as the axis labels of a log chart. Can you recognize the sleight of hand? We make the chart using the log values, after which we replace the log value labels with the original value labels. We adopt this graphical trick because humans don't think in log units, thus the log value labels are less "interpretable".
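Here is a minimal sketch (matplotlib, made-up values) of that convention. The left panel lets the software draw a log axis; the right panel performs the label substitution by hand:

```python
# A minimal sketch (matplotlib, made-up values) of the label substitution on a log chart.
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(1, 5)
y = 10.0 ** t                            # 10, 100, 1000, 10000 (made up)

fig, (ax_a, ax_b) = plt.subplots(1, 2, figsize=(8, 3))

# What the software does for us: a log-scale axis labeled in original units.
ax_a.plot(t, y)
ax_a.set_yscale("log")
ax_a.set_title("log axis, original labels")

# The same thing done by hand: plot log10(y), then print original values as labels.
ax_b.plot(t, np.log10(y))
ax_b.set_yticks([1, 2, 3, 4])
ax_b.set_yticklabels(["10", "100", "1,000", "10,000"])
ax_b.set_title("plotted in log10, relabeled")

plt.show()
```

Both panels show the same straight line; the only difference is who performed the substitution, you or the plotting library.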

As with the temperature chart, we will attempt to interpret the chart two ways. I've already covered interpretation B. For interpretation A, we regard the line chart as a straightforward plot of the values shown on the right axis (i.e., the original values). Alas, this viewpoint fails for the log chart.

If the original data are plotted directly, the chart should look like this:

Junkcharts_trendchart_y_origvalues

It's not a straight line but a curve.

What have I just shown? That, after using the sleight of hand, we cannot interpret the chart as if it were directly plotting the data expressed in the original scale.

To nail down this idea, we ask a basic question of any chart showing trendlines. What's the rate of change of Y?

Using the transformed log scale (left axis), we find that the rate of change is 1 unit per unit time. Using the original scale, the rate of change from t=1 to t=2 is (100-10)/1 = 90 units per unit time; from t=2 to t=3, it is (1000-100)/1 = 900 units per unit time. Even though the rate of change varies by time step, the log chart with original value labels conveys the misleading impression that the rate of change is constant over time (thus a straight line). The decision to substitute the log value labels backfires!

This is one reason why I use log charts sparingly. (I do like them a lot for exploratory analyses, but I avoid using them as presentation graphics.) This issue of interpretation is why I dislike the sleight of hand used to produce those log stock returns charts, even if the designer offers a note of explanation.

Do we gain or lose "interpretability" when we substitute those axis labels?

***

Let's re-examine the dual-axes temperature chart, building on what we just learned.

Junkcharts_tempchart_dualaxes

The above chart suggests that whichever scale (axis) is chosen, we get the same line, with the same steepness. Thus, the rate of change is the same regardless of scale. This turns out to be an illusion.

Using the left axis, the slope of the line is 10 degrees Celsius per unit time. Using the right axis, the slope is 18 degrees Fahrenheit per unit time. 18 F is different from 10 C, thus, the slopes are not really the same! The rate of change of the temperature is given algebraically by the slope, and visually by the steepness of the line. Since two different slopes result in the same line steepness, the visualization conveys a lie.

The situation here is a bit better than that in the log chart. Here, in either scale, the rate of change is constant over time. Differentiating the temperature conversion formula, we find that the slope of the Fahrenheit line is always 9/5 times the slope of the Celsius line. So a rate of 10 Celsius per unit time corresponds to 18 Fahrenheit per unit time.

What if the chart is presented with only the Fahrenheit axis labels although it is built using Celsius data? Since readers only see the F labels, the observed slope is in Fahrenheit units. Meanwhile, the chart creator uses Celsius units. This discrepancy is harmless for the temperature chart but it is egregious for the log chart. The underlying reason is the nonlinearity of the log transform - the slope of log Y vs time is not proportional to the slope of Y vs time; in fact, it depends on the value of Y.  
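In symbols (a short derivation, writing F and C for the Fahrenheit and Celsius series, and Y for the data on the log chart):

\[ F(t) = \tfrac{9}{5}\,C(t) + 32 \quad\Longrightarrow\quad \frac{dF}{dt} = \frac{9}{5}\,\frac{dC}{dt}, \]

a constant multiple of the Celsius slope, whereas

\[ \frac{d}{dt}\,\log Y(t) = \frac{1}{Y(t)}\,\frac{dY}{dt}, \]

a multiple that changes with the current value of Y.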

***

The log chart is a sacred cow of scientists, a symbol of our sophistication. Is it as potent as we think? In particular, when we put original data values on the log chart, are we making it more interpretable, or less?


P.S. I want to tie this discussion back to my Trifecta Checkup framework. The design decision to substitute those axis labels is an example of an act that moves the visual (V) away from the data (D). If the log units were printed, the visual would make sense; once the original units are dropped in, the visual no longer conveys the features of the data - the reader must ignore what the eyes are seeing, and rely instead on what the brain computes.


Logging a sleight of hand

Andrew puts up an interesting chart submitted by one of his readers (link):

Gelman_overnightreturns_tsla

Bruce Knuteson, who created this chart, is pursuing a theory that there is something fishy going on in the stock markets overnight (i.e. between the close of one day and the open of the next day). He split the price data into two interleaving parts: the blue line represents returns overnight and the green line represents returns intraday (from the open of one day to the close of the same day). In this example related to Tesla's stock, the overnight "return" is an eye-popping 36850% while the intraday "return" is -46%.

This is an example of an average masking interesting details in the data. One typically looks at the entire sequence of values at once, while this analysis breaks it up into two subsequences. I'll write more about the data analysis at a later point. This post will be purely about the visualization.

***

It turns out that while the chart looks like a standard time series, it isn't. Bruce wrote out the following essential explanation:

Gelman_overnightreturns

The chart can't be interpreted without first reading this note.

The left chart (a) is the standard time-series chart we're thinking about. It plots the relative cumulative percentage change in the value of the investment over time. Imagine one buys $1 of Apple stock on day 1. It shows the cumulative return on day X, expressed as a percent relative to the initial investment amount. As mentioned above, the data series was split into two: the intraday return series (green) is dwarfed by the overnight return series (blue), and is barely visible, hugging the horizontal axis.

Almost without thinking, a graphics designer applies a log transform to the vertical axis. This has the effect of "taming" the extreme values in the blue line. This is the key design change in the middle chart (b). The other change is to switch back to absolute values. The day 1 number is now $1 so the day X number shows the cumulative value of the investment on day X if one started with $1 on day 1.

There's a reason why I emphasized the log transform over the switch to absolute values. That's because the relationship between absolute and relative values here is a linear one. If y(t) is the absolute cumulative value of $1 at time t, then the percent change r(t) = 100(y(t) -1). (Note that y(0) = 1 by definition.)  The shape of the middle chart is primarily conditioned by the log transform.

In the right chart (c), which is the design that Bruce features in all his work, the visual elements of chart (b) are retained while the vertical axis labels are replaced with those from chart (a). In other words, the lines show the cumulative absolute values while the labels show the relative cumulative percent returns.

I left this note on Gelman's blog (corrected a mislabeling of the chart indices):

I'm interested in the sleight of hand related to the plots, also tying this back to the recent post about log scales. In plot (b) [middle of the panel], he transformed the data to show the cumulative value of the investment assuming one puts $1 in the stock on day 1. He applied a log scale on the vertical axis. This is fine. Then in plot (c), he retained the chart but changed the vertical axis labels so instead of the absolute value of the investment, he shows percent changes relative to the initial value.

Why didn't he just plot the relative percent changes? Let y(t) be the absolute value and r(t) = 100*(y(t) - 1) be the percent change, a simple linear transformation of y(t). This is where the log transform creates problems! The y(t) series is guaranteed to be positive since hitting y(t) = 0 means the entire investment is lost. However, the r(t) series can hit negative values and also cross over zero many times. Thus, log r(t) is inoperable. The problem is using the log transform for data that are not always positive, and the sleight of hand does not fix it!

Just pick any day in which the absolute return fell below $1, e.g. the last day of the plot in which the absolute value of the investment was down to $0.80. In the middle plot (b), the value depicted is ln(0.8) = -0.22. Note that the plot is in log scale, so what is labeled as $1 is really ln(1) = 0. If we instead try to plot the relative percent changes, then the day 1 number should be ln(0) which is undefined while the last number should be ln(-20%) which is also undefined.

This is another example of something uncomfortable about using log scales which I pointed out in this post. It's this idea that when we do log plots, we can freely substitute axis labels which are not directly proportional to the actual labels. It's plotting one thing, and labelling it something else. These labels are then disconnected from the visual encoding. It's against the goal of visualizing data.
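To make the objection concrete, here is a minimal sketch (NumPy, with made-up cumulative values, not Bruce's data) of why the log transform cannot be applied to the percent-return series:

```python
# A minimal sketch (NumPy, made-up cumulative values) of why log(r) breaks down.
import numpy as np

y = np.array([1.00, 1.50, 0.90, 1.20, 0.80])   # cumulative value of $1 (made up)
r = 100 * (y - 1)                              # percent returns: 0, 50, -10, 20, -20

print(np.log(y))                               # fine: y stays positive, so the log chart works
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.log(r))                           # -inf on day 1 (r = 0), nan wherever r < 0
```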



The message left the visual

The following chart showed up in Princeton Alumni Weekly, in a report about China's population:

Sciam_chinapop_19802020

This chart was one of several that appeared in a related Scientific American article.

The story itself is not surprising. As China develops, its birth rate declines, while the death rate also falls, thus, the population ages. The same story has played out in all advanced economies.

***

From a Trifecta Checkup perspective, this chart suffers from several problems.

The text annotation on the top right suggests what message the authors intended to deliver. Pointing to the group of people aged between 30 and 59 in 2020, they remarked that this large cohort would likely cause "a crisis" when they age. There would be fewer youngsters to support them.

Unfortunately, the data and visual elements of the chart do not align with this message. Instead of looking forward in time, the chart compares the 2020 population pyramid with that from 1980, looking back 40 years. The chart shows an insight from the data, just not the right one.

A major feature of a population pyramid is the split by gender. The trouble is gender isn't part of the story here.

In terms of age groups, the chart treats each subgroup "fairly". As a result, the reader isn't shown which of the 22 subgroups to focus on. There are really 44 subgroups if we count each gender separately, and 88 subgroups if we include the year split.

***

The following redesign traces the "crisis" subgroup (those who were 30-59 in 2020) both backwards and forwards.

Junkcharts_redo_chinapopulationpyramids

The gender split has been removed; here, the columns show the total population. Color is used to focus attention on one cohort as it moves through time.

Notice I switched up the sample times. I pulled the population data for 1990 and 2060 (from this website). The original design used the population data from 1980 instead of 1990. However, this choice is at odds with the message. People who were 30 in 2020 were not yet born in 1980! They started showing up in the 1990 dataset.

At the other end of the "crisis" cohort, the oldest (59 years old in 2020) would be deceased by 2100, since they would be 59 + 80 = 139 by then. Even the youngest (30 in 2020) would be 110 by 2100, so almost everyone in the pink section of the 2020 chart would have fallen off the right side of the chart by 2100.

These design decisions insert a gap between the visual and the message.



Aligning the visual and the message

Today's post is about work by Diane Barnhart, who is a product manager at Bloomberg, and is taking Ray Vella's infographics class at NYU. The class is given a chart from the Economist, as well as some data on GDP per capita in selected countries at the regional level. The students are asked to produce a data visualization that explores the change in income inequality (as indicated by GDP per capita).

Here is Diane's work:

Diane Barnhart_Rich Get Richer

In this chart, the key measure is the GDP per capita of different regions in Germany relative to the national average. Hamburg, for example, had a GDP per capita that was 80% above the national average in 2000, while Leipzig's GDP per capita was 30% below the national average in 2000. (This metric is a bit of a head scratcher, and forms the basis of the Economist chart.)

***

Diane made several insightful design choices.

The key insight of this graph is also one of the easiest to see. It's the narrowing of the range of possible values. In 2000, the top value is about 90% while the bottom is under -40%, making a range of 130%. In 2020, the range has narrowed to 90%, with the values falling between 60% and -30%. In other words, the gap between rich and poor regions in Germany has reduced over these two decades.

The chosen chart form makes this message come alive.

Diane divided the regions into three groups, mapped to the black, red and yellow colors of the German flag. Black is for regions with GDP per capita above the national average; yellow for regions with GDP per capita more than 25% below the average; red for the regions in between.

Instead of applying color to individual lines that trace the GDP metric over time for each region, she divided the area between the lines into three, and painted them. This necessitates a definition of the boundary line between colored areas over time. I gathered that she classified the regions using the latest GDP data (2020) and then traced the GDP trend lines back in time. Other definitions are also possible.

The two-column data table shown on the right provides further details that aren't found in the data visualization. The table is nicely enhanced with colors. They represent an augmentation of the information in the main chart, not a repetition.

All in all, this is a delightful project, and worthy of a top grade!


Anti-encoding

Howie H., sometime contributor to our blog, found this chart in a doctor's office:

WhenToExpectAReturnCall_sm

Howie writes:

Among the multitude of data visualization sins here, I think the worst is that the chart *anti*-encodes the data; the longest wait time has the shortest arc!

While I waited I thought about a redesign.  Obviously a simple bar chart would work.  A properly encoded radial bar could work, or small multiple pie charts.  But I think the design brief here probably calls for a bit of responsible data art, as this is supposed to be an eye-catching poster.

I came up with a sort of bar superimposed on a calendar for reference.  To quickly draft the design it was easier to do small multiples, but maybe all three arrows could be placed on a two-week grid and the labels could be inside the arrows, or something like that.  It’s a very rough draft but I think it points toward a win-win of encoding the actual data while retaining the eye-catching poster-ness that I’m guessing was a design goal.

Here is his sketch:

JunkCharts-redo_howardh_WhenToExpectAReturnCall redesign sm

***

I found a couple of interesting ideas from Howie's re-design.

First, he tried to embody the concept of a week's wait by visual reference to a weekly calendar.

Second, in the third section, he wanted readers to experience "hardship" by making their eyes wrap around to a second row.

He wanted the chart to be both accurate and eye-catching.

It's a nice attempt that will improve as he fiddles more with it.

***

Based on Howie's ideas, I came up with two sketches myself.

In the first sketch, instead of the arrows, I put numbers into the cells.

Junkcharts_redo_whentoexpectareturncall_1

In the second sketch, I emphasized eye-catching appeal while sacrificing accuracy. It uses spiral imagery, and I think it does a good job showing the extra pain of a week-long wait. Each trip around the circle represents 24 hours.

Junkcharts_redo_whentoexpectacall_2

The wait time is actually encoded in the traversal of angles, rather than the length of the spiral. I call this creation less than accurate because most readers will assume the spiral length to be the wait time, and thus misread the data.

Which one(s) do you like?


The reckless practice of eyeballing trend lines

MSN showed this chart claiming a huge increase in the number of British children who believe they are born the wrong gender.

Msn_genderdysphoria

The graph has a number of defects, starting with drawing a red line that clearly isn’t the trend in the data.

To find the trend line, we have to draw a line that comes closest to the tops of all the columns. The true trend line is closer to the blue line drawn below:

Junkcharts_redo_msngenderdysphoria_1

The red line moves up one unit roughly every three years while the blue line does so every four years.

Notice the dramatic jump in the last column of the chart. The observed trend is not a straight line, and therefore it is not appropriate to force a straight-line model. Instead, it makes more sense to divide the time line into three periods, with different rates of change.

Junkcharts_redo_msngenderdysphoria_2

Most of the growth during this 10-year period occurred in the last year. One should check the data, and also check whether any accounting criterion changed that might explain this large, unexpected jump.

***

The other curiosity about this chart is the scale of the vertical axis. Nowhere on the chart does it say which metric of gender dysphoria it is depicting. The title suggests they are counting the number of diagnoses but the axis labels that range from one to five point to some other metric.

From the article, we learn that the annual number of gender dysphoria diagnoses was about 10,000 in 2021, and that is encoded as 4.5 in the column chart. The sub-header of the chart indicates that the unit is the number per 1,000 people. Ten thousand diagnoses, divided by the size of the under-18 population, then multiplied by 1,000, equals 4.5. This implies there were roughly 2.2 million people under 18 in the U.K. in 2021.
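Working the implied arithmetic backwards from the quoted numbers:

\[ \frac{10{,}000 \ \text{diagnoses}}{\text{under-18 population}} \times 1{,}000 = 4.5 \quad\Longrightarrow\quad \text{under-18 population} \approx \frac{10{,}000}{4.5} \times 1{,}000 \approx 2.2 \ \text{million}. \]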

But according to these official statistics (link), there were about 13 million people aged 0-18 in just England and Wales in mid-2022, which is not in the right range. From a dataviz perspective, the designer needs to explain what the values on the vertical axis represent. Right now, I have no idea what they mean.

***

Using the Trifecta Checkup framework, we say that the question addressed by the chart is clear but there are problems relating to data encoding as well as the trend-line visual.

_trifectacheckup_image


Making major things easy, and minor things hard

A recent issue of Significance magazine carried the following stacked column chart showing how the driver license status of men and women changes as they age. The data came from the U.K.

Siginificance_olddrivers_1

Quick question - what percentage of British men in their sixties hold full driver licenses?

***

I was just kidding. Questions like that can't be quickly answered on a stacked column chart. That's because you have to find the relevant axis, and then mentally invert it.

On that chart, larger values are shown pointing down (green) and also pointing up (blue), and ... well, I don't have words for the yellow. In fact, the yellow segments, showing people without licenses, are possibly the most important category for this report.

In making decisions about visualizing data, it's important to separate out the major things from the minor things.

***

Here is a reimagination of the chart using connected dots:

Junkcharts_redo_significanceolderdrivers

What is hard to do using this chart is to verify that the three proportions add to 100%. What is easy is to read off the proportion for any gender, age and license status subgroup.

It's really quite intricate how these researchers binned the age data. There are bins of size 1, 4, 5 and 10, plus a top group of 85 and above. The way I handled these is to turn everything into 1-year bins. I assume that in the wider bins, we don't have precise data for each age, and that the bin value is the average within the bin, so it is as if someone had drawn a horizontal line across the bin width. (I left the top bin alone as I don't know the maximum age of a person in this study.)
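Here's a minimal sketch (pandas, with made-up bins and percentages rather than the Significance data) of that conversion:

```python
# Spread irregular age bins into 1-year bins by repeating each bin's value,
# i.e. drawing a horizontal line across the bin width.
import pandas as pd

bins = pd.DataFrame({
    "age_lo": [17, 18, 20, 25],            # bin boundaries (made up)
    "age_hi": [17, 19, 24, 34],
    "pct_full_licence": [30, 45, 60, 75],  # made-up percentages
})

rows = []
for b in bins.itertuples():
    for age in range(b.age_lo, b.age_hi + 1):
        # every single year in the bin inherits the bin's value
        rows.append({"age": age, "pct_full_licence": b.pct_full_licence})

one_year = pd.DataFrame(rows)
print(one_year)
```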

***

Those of you who have laminated the flowchart of data visualization are probably irate. According to such a flowchart, one must use a column chart because the x variable (age band) has irregularly-sized discrete values, and one must use a stacked column chart because the y variable is a percentage, grouped by a third variable (license status).

Don't be mad, just ditch the flowchart.



Deliberately obstructing chart elements as a plot point

Bbc_globalwarming_ridgeplot sm

These "ridge plots" have become quite popular in recent times. The following example, from this BBC report (link), shows the change in global air temperatures over time.

***

This chart is in reality a panel of probability density plots, one for each year of the dataset. The years are arranged with the oldest at the top and the most recent at the bottom. You take those plots and squeeze every ounce of the space out, so that each chart overlaps massively with the ones above it.

The plot at the bottom is the only one that can be seen unobstructed.

Overplotting chart elements, deliberately obstructing them, doesn't sound useful. Is there something gained for what's lost?

***

The appeal of the ridge plot is the metaphor of ridges, or crests if you see ocean waves. What do these features signify?

The legend at the bottom of the chart gives a hint.

The main metric used to describe global warming is the amount of excess temperature, defined as the temperature relative to a historical average, set as the average temperature during the pre-industrial age. In recent years, the average global temperature is about 1.5 degrees Celsius above the reference level.

One might think that the higher the peak in a given plot, the higher the excess temperature. Not so. The heights of those peaks do not indicate temperatures.

What's the scale of the vertical axis? The labels suggest years, but that's a distractor also. If we consider the panel of non-overlapping probability density charts, the vertical axis should show probability density. In such a panel, the year labels should go to the titles of individual plots. On the ridge plot, the density axes are sacrificed, while the year labels are shifted to the vertical axis.

Admittedly, probability density is not an intuitive concept, so not much is lost by its omission.

The legend appears to suggest that the vertical scale is expressed in number of days so that in any given year, the peak of the curve occurs where the most likely excess temperature is found. But the amount of excess is read from the horizontal axis, not the vertical axis - it is encoded as a displacement in location horizontally away from the historical average. In other words, the height of the peak still doesn't correlate with the magnitude of the excess temperature.

The following set of probability density curves (with made-up data) each has the same average excess temperature of 1.5 degrees. Going from top to bottom, the variability of the excess temperatures increases. The height of the peak decreases accordingly because in a density plot, we require the total area under the curve to be fixed. Thus, the higher the peak, the lower the daily variability of the excess temperature.

Kfung_pdf_variances
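A panel like the one above can be generated with a few lines of code. This sketch assumes normal distributions with a common mean of 1.5 and made-up spreads:

```python
# Density curves with the same mean but increasing variability: the peak drops
# as the spread grows, because each curve must integrate to 1.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-1.0, 4.0, 400)
fig, axes = plt.subplots(4, 1, sharex=True, figsize=(5, 6))
for ax, sd in zip(axes, [0.2, 0.4, 0.8, 1.6]):     # made-up standard deviations
    ax.plot(x, norm.pdf(x, loc=1.5, scale=sd))
    ax.set_yticks([])                              # drop the density axis, as in a ridge plot
axes[-1].set_xlabel("excess temperature (deg C)")
plt.show()
```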

A problem with this ridge plot is that it draws our attention to the heights of the peaks, which provide information about a secondary metric.

If we want to find the story that the amount of excess temperature has been increasing over time, we would have to trace a curve through the ridges, which strangely enough is a line that moves top to bottom, initially somewhat vertically, then moving sideways to the right. In a more conventional chart, the line that shows growth over time moves from bottom left to top right.

***

The BBC article (link) features several charts. The first one shows how the average excess temperature trends year to year. This is a simple column chart. By supplementing the column chart with the ridge plot, I assume that the designer wants to tell readers that the average annual excess temperature masks daily variability. Therefore, each annual average has been disaggregated into 366 daily averages.

In the column chart, the annual average is compared to the historical average of 50 years. In the ridge plot, the daily average is compared to ... the same historical average of 50 years. That's what the reference line labeled pre-industrial average is saying to me.

It makes more sense to compare the 366 daily averages to 366 daily averages from those 50 years.

But now I've ruined the dataviz because in each probability density plot, there are 366 different reference points. But not really. We just have to think a little more abstractly. These 366 different temperatures are all mapped to the number zero, after adjustment. Thus, they all coincide at the same location on the horizontal axis.

(It's possible that they actually used 366 daily averages as references to construct the ridge plot. I'm guessing not but feel free to comment if you know how these values are computed.)


Organizing time-stamped data

In a previous post, I looked at the Economist chart about Elon Musk's tweeting compulsion. It's a chart that contains lots of data - every tweet is included - but one can't tell the number or frequency of tweets.

In today's post, I'll walk through a couple of sketches of other charts. I was able to find a dataset on Github that does not cover the same period of time but it's good enough for illustration purposes.

As discussed previously, I took cues from the Economist chart, in particular that the hours of the day should be divided up into four equal-width periods. One thing Musk is known for is tweeting at any hour of the day.

Junkcharts_redo_musktweets_columnsbyhourgroup

This is a small-multiples arrangement of column charts. Each column chart represents the tweets that were posted during a six-hour window, across all days in the dataset. A column covers half a year of tweets. We note that there were more tweets in the afternoon hours as he started tweeting more. In the first half of 2022, he sent roughly 750 tweets between 7 pm and midnight.
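For readers who want to reproduce this kind of aggregation, here is a minimal sketch (pandas; the file name, column name and time-zone handling are my assumptions, not details of the Github dataset):

```python
# A minimal sketch of the aggregation behind the column charts:
# tweets counted by six-hour window and half-year.
import pandas as pd

tweets = pd.read_csv("musk_tweets.csv", parse_dates=["created_at"])   # hypothetical file/column
local = (tweets["created_at"]
         .dt.tz_localize("UTC")            # assume the stamps are stored naive in UTC
         .dt.tz_convert("US/Pacific"))     # assume all tweets sent from Pacific time

# four equal six-hour windows of the day, as in the Economist chart
tweets["period"] = pd.cut(local.dt.hour, bins=[0, 6, 12, 18, 24], right=False,
                          labels=["0-6", "6-12", "12-18", "18-24"])
# half-year buckets along the horizontal axis
tweets["half_year"] = local.dt.year.astype(str) + "H" + ((local.dt.month > 6) + 1).astype(str)

counts = tweets.groupby(["period", "half_year"], observed=True).size().unstack()
print(counts)   # one row per six-hour window, one column per half-year
```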

***

In this next sketch, I used a small-multiples of line charts. Each line chart represents tweets posted during a six-hour window, as before. Instead of counting how many tweets, here I "smoothed" the daily tweet count, so that each number is an average daily tweet count, with the average computed based on a rolling time window.

Junkcharts_redo_musktweets_sidebysidelines
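And a minimal sketch of the smoothing (pandas; it repeats the hypothetical setup from the previous sketch, and the 28-day window is my own choice, not the one used in the actual charts):

```python
# Rolling average of daily tweet counts within each six-hour window,
# re-inserting days with no tweets as zeros before smoothing.
import pandas as pd

tweets = pd.read_csv("musk_tweets.csv", parse_dates=["created_at"])   # hypothetical file/column
local = tweets["created_at"].dt.tz_localize("UTC").dt.tz_convert("US/Pacific")
tweets["period"] = pd.cut(local.dt.hour, bins=[0, 6, 12, 18, 24], right=False,
                          labels=["0-6", "6-12", "12-18", "18-24"])

# daily tweet counts within each six-hour window
daily = (tweets.groupby(["period", local.dt.date.rename("day")], observed=True)
               .size().rename("n_tweets"))

smoothed = {}
for window in daily.index.get_level_values("period").unique():
    s = daily.xs(window, level="period")
    s.index = pd.to_datetime(s.index)
    s = s.asfreq("D", fill_value=0)              # days with no tweets become zeros, not holes
    smoothed[window] = s.rolling(28, min_periods=1).mean()   # 28-day rolling average
```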


***

Finally, let's cover a few details only people who make charts would care about. The time of day variable only makes sense if all times are expressed as "local time", i.e. the time at the location where Musk was tweeting from. This knowledge is not necessary to make a chart but it is essential to make the chart interpretable. A statement like Musk tweets a lot around midnight assumes that it was midnight where he was when he sent each tweet.

Since we don't have his travel schedule, we will definitely be wrong. In my charts, I assumed he is in the Pacific time zone, and never tweeted anywhere outside that time zone.

(Food for thought: the server that posts tweets certainly had the record of the time and time zone for each tweet. Typically, databases store these time stamps standardized to one time zone - call it Greenwich Mean Time. If you have all time stamps expressed in GMT, is it now possible to make a statement about midnight tweeting? Does standardizing to one time zone solve this problem?)

In addition, I suspect that there may be problems with the function used to compute those rolling sums and averages, so take the actual numbers on those sketches with a grain of salt. Specifically, it's hard to tell on any of these charts but Musk did not tweet every single day so there are lots of holes in the time series.


Don't show everything

There are many examples where one should not show everything when visualizing data.

A long-time reader sent me this chart from the Economist, published around Thanksgiving last year:

Economist_musk

It's a scatter plot with each dot representing a single tweet by Elon Musk against a grid of years (on the horizontal axis) and time of day (on the vertical axis).

The easy messages to pick up include:

  • the increase in frequency of tweets over the years
  • especially, the jump in density after Musk bought Twitter in late 2022 (there is also a less obvious level up around 2018)
  • the almost continuous tweeting throughout 24 hours.

By contrast, it's hard if not impossible to learn the following:

  • how many tweets did he make on average or in total per year, per day, per hour?
  • the density of tweets for any single period of time (i.e., a reference for everything else)
  • the growth rate over time, especially the magnitude of the jumps

The paradox: a chart that is data-dense but information-poor.

***

The designer added gridlines and axis labels to help structure our reading. Specifically, we're cued to separate the 24 hours into four 6-hour chunks. We're also expected to divide the years into two groups (pre- and post- the Musk acquisition), and secondarily, into one-year intervals.

If we accept this analytical frame, then we can divide time into these boxes, compute summary statistics within each box, and present those values. I'm working on some concepts, and will show them next time.