

==Tea party graphics==
[http://www.nytimes.com/2010/04/17/opinion/17blow.html?hp A mighty pale tea]<br>
by Charles M. Blow, ''New York Times'', 16 April 2010

This article recounts Blow's experience visiting a Tea Party rally as a self-identified "infiltrator."  He was interested in assessing the group's diversity.  Reproduced below is a portion of a graphic, entitled [http://www.nytimes.com/imagepages/2010/04/17/opinion/17blowimg.html The many shades of whites], that accompanied the article.
<center>
[[Image:Shades.png]]
</center>
The data are from a recent [http://www.nytimes.com/2010/04/15/us/politics/15poll.html NYT/CBS Poll].

Submitted by Paul Alper


==Odds are, it's wrong==
by Tom Siegfried, ''Science News'', 27 March 2010

This is a provocative essay on the limitations of significance testing in scientific research. The main themes are that it is easy to do such tests incorrectly, and that even when the tests are done correctly, the results are subject to widespread misinterpretation.

Submitted by Bill Peterson, based on a suggestion from Scott Pardee

===Discussion Questions===

1. [suggested by Bill Jefferys] Box 2, paragraph 1 of the article states "Actually, the P value gives the probability of observing a result if the null hypothesis is true, and there is no real effect of a treatment or difference between groups being tested. A P value of .05, for instance, means that there is only a 5 percent chance of getting the observed results if the null hypothesis is correct." Why is this statement wrong?
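One distinction that often comes up in discussions of this question is the difference between the probability of the exact observed result and the probability of a result at least as extreme as the one observed. The following sketch (in Python, with made-up coin-tossing numbers; it is an illustration only, not the article's example or a complete answer to the question) computes both quantities for a simple binomial test.

<pre>
from math import comb

# Hypothetical example: 60 heads in 100 tosses of a coin that the null
# hypothesis assumes is fair (p = 0.5). The numbers are made up for
# illustration and do not come from the article.
n, k, p = 100, 60, 0.5

def binom_pmf(x, n, p):
    """Probability of exactly x successes in n independent Bernoulli(p) trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Probability of the exact observed count under the null hypothesis
prob_exact = binom_pmf(k, n, p)

# One-sided P value: probability, under the null hypothesis, of a count
# at least as extreme as the one observed (60 or more heads)
p_value = sum(binom_pmf(x, n, p) for x in range(k, n + 1))

print(f"P(exactly {k} heads | fair coin)  = {prob_exact:.4f}")   # about 0.011
print(f"P(at least {k} heads | fair coin) = {p_value:.4f}")      # about 0.028
</pre>

With these made-up numbers the two probabilities differ by more than a factor of two, which is one way to see that "the probability of getting the observed results" is not, on its own, a precise description of a P value.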