Revision as of 01:13, 4 May 2010
Odds are, it's wrong
by Tom Siegfried, Science News, 27 March 2010
This is a long and provocative essay on the limitations of significance testing in scientific research. The main themes are that it is easy to do such tests incorrectly, and that, even when they are done correctly, they are subject to widespread misinterpretation.
For example, the article cites the following misinterpretation of significance at the 5% level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.” Indeed, versions of this are all too often seen in print!
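A quick back-of-the-envelope calculation shows why the quoted interpretation fails. The numbers below are purely illustrative assumptions (not from the article): suppose only 10% of the hypotheses a field tests correspond to real effects, and tests have 80% power. Then, by Bayes' rule, a significant result is far from "95 percent certain" to be real:

```python
# Illustrative sketch: why p < .05 does not mean "95 percent certain
# the effect is real". All three numbers below are assumptions chosen
# for illustration, not figures from the article.
prior_real = 0.10   # assumed fraction of tested hypotheses with a real effect
power      = 0.80   # assumed chance a real effect yields p < .05
alpha      = 0.05   # chance a null (no-effect) test yields p < .05

# Among all significant results, the fraction reflecting real effects:
true_pos  = prior_real * power          # 0.08
false_pos = (1 - prior_real) * alpha    # 0.045
prob_real_given_sig = true_pos / (true_pos + false_pos)
print(round(prob_real_given_sig, 2))    # 0.64 -- well below 0.95
```

Under these assumptions, only about 64% of "significant" findings are real effects; the 95% figure describes the test's behavior under the null, not the probability that a given finding is true.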
Also discussed are the multiple comparisons problem, the challenges of interpreting meta-analyses, and the disagreements between frequentists and Bayesians.
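The multiple comparisons problem mentioned above can be illustrated with one line of arithmetic. Assuming 20 independent tests (an arbitrary illustrative number) on data containing no real effects, the chance of at least one spurious "significant" result at the 5% level is already well over half:

```python
# Sketch of the multiple comparisons problem: with many tests at the
# 5% level, a false positive somewhere becomes likely even when every
# null hypothesis is true. The count m = 20 is an illustrative choice.
alpha = 0.05
m = 20  # number of independent comparisons (assumed for illustration)
p_at_least_one = 1 - (1 - alpha) ** m
print(round(p_at_least_one, 2))  # about 0.64
```

This is why procedures such as the Bonferroni correction lower the per-test threshold when many comparisons are made.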
Submitted by Bill Peterson, based on a suggestion from Scott Pardee
Discussion Questions
1. [suggested by Bill Jefferys] Box 2, paragraph 1 of the article states "Actually, the P value gives the probability of observing a result if the null hypothesis is true, and there is no real effect of a treatment or difference between groups being tested. A P value of .05, for instance, means that there is only a 5 percent chance of getting the observed results if the null hypothesis is correct." Why is this statement wrong?