The March 28 issue of *Bloomberg Businessweek* has a rather good summary of the problems of p-values, even recommending the use of confidence intervals and — wonder of wonders — “[looking] at the evidence as a whole.” What, statistics can’t make our decisions for us? 🙂

It does make some vague and sometimes puzzling statements, but for the p-values issue to actually find its way into such a nontechnical, mainstream publication as this one is pretty darn remarkable. Thank you, ASA!

The article, “Lies, Damned Lies and More Statistics,” is on page 12. Unfortunately, I can’t find it online.

In my previous posts on the p-value issue, I took issue with the significance test orientation of the R language. I hope articles like this will push the R Core Team in the right direction.


I’m not sure if this is the same as the online article published last week. Here is the link: http://www.bloombergview.com/articles/2016-03-21/the-value-of-that-p-value

Yes, that’s it. Thanks very much.

Any link to the Bloomberg article? I couldn’t find it.

I couldn’t find it either. Am hoping someone might.

I have read your posts and am a little confused as to why you advocate for confidence intervals. Like the p-value, the definition of a confidence interval does not make sense to many people (even statisticians). If I repeat this experiment, say 100 times, then approximately 95 of the intervals obtained will contain the fixed but unknown parameter–huh? In my field, and I am sure most fields, we almost never exactly repeat an experiment. Further, the definition of a CI says nothing of the interval we actually observed, other than the probability the parameter is within the bounds of the interval is 0 or 1.

So can you clarify why confidence intervals are more informative than p-values? If uncertainty is what we are after, then shouldn’t we be using Bayesian credible intervals, which actually measure our uncertainty?

I’m not a Bayesian, so the last question doesn’t apply. But I’ll address the rest.

Non-statisticians are VERY comfortable with confidence intervals. Just look at the reports of the election polls on the TV news, with the report of margin of error.

Of course you are not repeating the experiment. The probability interpretation comes from thinking of what would happen IF you were to repeat it. Or, you can think of all the confidence intervals you form over the course of your career; 95% of them will contain the true population value being estimated.
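That coverage interpretation is easy to check empirically. Here is a minimal simulation sketch (my own illustration, not from the post): draw many samples from a population with a known mean, form the usual approximate 95% interval xbar ± 1.96·s/√n for each, and count how often the interval captures the true mean. All names and parameter values here are arbitrary choices for the demo.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 10.0   # known population mean (so we can check coverage)
SD = 2.0           # population standard deviation
N = 50             # sample size per "experiment"
REPS = 2000        # number of repeated experiments
Z = 1.96           # approximate 95% normal quantile

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # does this interval contain the true mean?
    if xbar - Z * se <= TRUE_MEAN <= xbar + Z * se:
        covered += 1

coverage = covered / REPS
print(coverage)
```

The printed proportion comes out close to 0.95, which is exactly the long-run guarantee the definition describes: no single interval is "95% likely" to be right, but roughly 95% of the intervals you form this way will be.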

Do you have trouble accepting the statement, “On my NEXT flip of this coin, the probability of heads is 50%”? You think that is fine, right? But there will be only ONE next time, not repeatable. It’s no different from the CI situation.