
One thing that is going to need to be taught more is the problems with observational data and the possible solutions. Too much of statistics is taught as "we have some data, now we will fit a model," rather than first looking at how the data were generated and how that will affect the fitted model. I once had a data set of responses and doses of a pharmaceutical, and someone had the bright idea that we could see how dose affected response. Big problem: dose is not randomised; it is determined by the clinician. How does the clinician determine dose? He starts low and increases until the required response is achieved. So patients who are sensitive to the drug get low doses but high responses, while those who are not sensitive get high doses but low responses. Fit a model and you find that response decreases with increasing dose. If you have the complete dosing history, an appropriate model will give sensible results, but everything else is just stupid. Expect to see lots of these types of analyses.

One solution may be to use simulation to demonstrate the problems. It is so easy in R.
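The commenter suggests R; here is an equivalent minimal sketch in Python instead, with made-up numbers (a target response of 10, doses titrated from 1 up to a cap of 8, and patient sensitivities drawn uniformly). Every patient's true response rises with dose, yet the naive regression slope comes out negative:

```python
import random

random.seed(1)

# Hypothetical numbers: target response 10, doses titrated from 1 up to 8.
target = 10.0
max_dose = 8.0

doses, responses = [], []
for _ in range(500):
    sensitivity = random.uniform(0.5, 5.0)   # patient-specific drug sensitivity
    dose = 1.0
    # Clinician titrates: raise the dose until the target response is reached.
    while sensitivity * dose < target and dose < max_dose:
        dose += 1.0
    response = sensitivity * dose            # true effect: response rises with dose
    doses.append(dose)
    responses.append(response)

# Naive least-squares slope of response on dose.
n = len(doses)
mx, my = sum(doses) / n, sum(responses) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(doses, responses))
         / sum((x - mx) ** 2 for x in doses))
print(slope)  # negative: dose appears to *reduce* response
```

Sensitive patients stop titrating at low doses having already hit the target, while insensitive patients pile up at the maximum dose with low responses, so the fitted slope has the wrong sign.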

There are so many other factors that may be unknown in the student test score case you are referring to. What is the student's socioeconomic level? Did they eat that day?

We want simple answers and simple correlations to complex problems.

Good medical and drug studies factor in many variables, don't they? You want a group of people to study who are aged 40-45, smokers, BMI 30+, etc., to trial a new BP medication.

Carolyn, the problem is that even if you use lots of variables you may not have the ones you need. There are techniques that allow causal modelling with observational data, but they always require knowing enough about the subjects. For example, someone could use a doctor's database to look at the effect of prescribing two different drugs. If we know that he tends to prescribe based on the presence of various health conditions, and that data is available, then we can do wonderful things. If, however, he prescribes based on whether he thinks the patient will comply with the treatment, and we don't have that information, then we can't do anything. The subjects on one drug will be less compliant, which will likely affect how well the drug appears to work, but we won't know. Randomised trials avoid this problem.
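The prescribing example can be simulated the same way. This is a hedged sketch with invented numbers: two equally effective drugs, an unrecorded rule that steers likely compliers to drug A, and an outcome that depends only on whether the drug was actually taken:

```python
import random

random.seed(2)

outcomes = {"A": [], "B": []}
for _ in range(2000):
    compliance = random.random()     # chance the patient actually takes the drug
    # The doctor's unrecorded rule: likely compliers get drug A, the rest drug B.
    drug = "A" if compliance > 0.5 else "B"
    took_it = random.random() < compliance
    # Both drugs are equally effective when taken; untreated outcome is worse.
    outcome = 1.0 if took_it else 0.2
    outcomes[drug].append(outcome)

mean_a = sum(outcomes["A"]) / len(outcomes["A"])
mean_b = sum(outcomes["B"]) / len(outcomes["B"])
print(mean_a, mean_b)  # drug A looks much better, though the drugs are identical
```

Since compliance was never recorded, no amount of adjustment on the observed variables can recover the truth that the drugs are identical, which is exactly why randomisation is needed.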

Carolyn & Ken: Those are good points. Even randomized trials are difficult to execute well, and observational studies are mostly badly analyzed. There is also the fallacy that you can "control" for lots of variables: you can only control for things you know about and can measure. In addition, many real-life factors are confounded, and you can't control for that. Randomized trials, though, are limited in the types of research questions they can address.

Yes, nutritional epidemiology seems a bit of a mess. Observational studies have the problem that what people eat is often determined by other factors that may have a greater effect on mortality. Clinical trials are problematic too: it is unethical to force people to eat bad food, there are only so many good foods you can add, and even when you do, subjects often don't stick to the protocol. Some things we may never know.
