Chance News 12
==Is the human brain a Bayesian-reasoning machine?==
[http://economist.com/science/displaystory.cfm?story_id=5354696&no_na_tran=1 Bayes rules], Jan 5th 2006, The Economist.<br>
The lead article in this week's Science & Technology section of The Economist claims that Bayesian statistics may help to explain how the mind works, and even argues that the human mind is a Bayesian one.
The Economist article begins with a summary of Bayes' ideas:
<blockquote>
[[http://en.wikipedia.org/wiki/Bayes Bayes'] ideas] about the prediction of future events from one or two examples were popular for a while, and have never been fundamentally challenged. But they were eventually overwhelmed by those of the [http://en.wikipedia.org/wiki/Frequentist frequentist] school, which developed the methods based on sampling from a large population that now dominate the field and are used to predict things as diverse as the outcomes of elections and preferences for chocolate bars.
</blockquote>
But Bayes has recently made a comeback among computer scientists designing software with human-like intelligence, such as internet search engines and automated 'help wizards'.
In many situations the true answer cannot be determined from the limited data available,
yet common sense suggests at least a reasonable guess.
For example:
* how much longer will a 60-year-old man live?
* can you identify a three-dimensional object from a two-dimensional diagram?
* what will be the total gross of a movie that has made $40m at the box office so far?
That has prompted some psychologists to ask whether the human brain itself might be a Bayesian-reasoning machine.
Accounts of human perception and memory suggest that these systems effectively
approximate optimal statistical inference, correctly combining new data with an accurate
probabilistic model of the environment.
The Economist article suggests that
<blockquote>
The Bayesian capacity to draw strong inferences from sparse data could be crucial to the way the mind perceives the world, plans actions, comprehends and learns language, reasons from correlation to causation, and even understands the goals and beliefs of other minds.
</blockquote>
It goes on to summarise how Bayesian reasoning works:
<blockquote>
The key to successful Bayesian reasoning is not in having an extensive, unbiased sample, which is the eternal worry of frequentists, but rather in having an appropriate “prior”, as it is known to the cognoscenti. This prior is an assumption about the way the world works - in essence, a hypothesis about reality - that can be expressed as a mathematical probability distribution of the frequency with which events of a particular magnitude happen.
</blockquote>
The article claims that frequentism is thus a more robust approach, but that it is not well suited to making decisions on the basis of limited information - something people have to do all the time - and this is where Bayesian statistics excels.
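To make this concrete, here is a minimal sketch of the kind of calculation involved (the notation is ours, not The Economist's; the setup follows the Griffiths and Tenenbaum paper discussed below). Suppose a phenomenon has been running for <math>t</math> units and we want to predict its total duration <math>t_{total}</math>. If the moment of observation is assumed equally likely to fall anywhere within the total duration, the likelihood is <math>p(t \mid t_{total}) = 1/t_{total}</math> for <math>t_{total} \ge t</math>, and Bayes' rule combines it with the prior:
<math>p(t_{total} \mid t) \propto p(t \mid t_{total})\, p(t_{total}).</math>
The prediction is then the median of this posterior, the value <math>t^*</math> such that <math>P(t_{total} > t^* \mid t) = 1/2</math>. A power-law prior <math>p(t_{total}) \propto t_{total}^{-\gamma}</math> yields a multiplicative rule (the prediction is a fixed multiple of <math>t</math>), whereas a roughly Gaussian prior pulls the prediction towards its mean.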
The article discusses four prior distributions - Gaussian, Poisson, [http://en.wikipedia.org/wiki/Erlang_distribution Erlang] and [http://en.wikipedia.org/wiki/Power_law power-law] -
and [http://web.mit.edu/cocosci/Papers/prediction10.pdf an experiment] that two scientists,
Thomas Griffiths at Brown and Joshua Tenenbaum at MIT, conducted by giving
individual nuggets of information to each of the participants in their study
and asking them to draw a general conclusion.
The experiment found that people could make accurate predictions about the duration or extent of everyday phenomena
given limited data, such as the following.
(The authors used publicly available data to identify the true prior distributions, shown in brackets.)
* the total box-office “gross” takings of a movie, given how much it has made so far but not how long it has been on release (power-law)
* the number of lines in a poem, given how far into the poem a single line is (power-law)
* the time it takes to bake a cake, given how long it has already been in the oven (a complex and irregular distribution, according to the authors)
* the total length of the term that would be served by an American congressman, given how long he has already been in the House of Representatives (Erlang)
* an individual's lifespan, given his current age (approximately Gaussian)
* the run-time of a film (approximately Gaussian)
* the amount of time spent on hold in a telephone queuing system (traditionally modelled as Poisson, but the experiment's results suggest a power-law distribution, which matches other recent research)
* the reigns of Egyptian Pharaohs (approximately Erlang)
People’s prediction functions took on very different shapes in domains characterized by
Gaussian, power-law, or Erlang priors, just as expected under the ideal Bayesian analysis.
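For readers who want to experiment, here is a short numerical sketch in Python (not code from the paper; the grid and the prior parameters are made-up illustrative values) that computes the posterior-median prediction under a power-law and a Gaussian prior, using the uniform-sampling likelihood sketched above:
<pre>
import numpy as np

def posterior_median(t, prior, grid):
    """Median of p(t_total | t), assuming the observation time is uniform
    on [0, t_total], so the likelihood is 1/t_total for t_total >= t."""
    posterior = np.where(grid >= t, prior(grid) / grid, 0.0)
    posterior /= posterior.sum()
    return grid[np.searchsorted(np.cumsum(posterior), 0.5)]

grid = np.linspace(1.0, 500.0, 50000)      # candidate values of t_total

power_law = lambda x: x ** -2.0            # e.g. movie grosses (illustrative exponent)
gaussian = lambda x: np.exp(-0.5 * ((x - 75.0) / 15.0) ** 2)  # e.g. lifespans (illustrative mean and sd)

for t in (10, 40, 60, 90):
    print(t, posterior_median(t, power_law, grid), posterior_median(t, gaussian, grid))
</pre>
Under the power-law prior the predictions grow roughly in proportion to <math>t</math>, while under the Gaussian prior small values of <math>t</math> barely move the prediction away from the prior's mean - the qualitative difference in prediction functions described above.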
There were exceptions, such as an inability to estimate the length of the reign of an Egyptian Pharaoh in the fourth millennium BC:
people consistently overestimated it.
The analysis showed that the prior they were applying was an Erlang distribution, which was the correct type.
They just got the parameters wrong,
presumably through lack of knowledge of political and medical conditions in fourth-millennium BC Egypt.
The authors claim that
<blockquote>
everyday cognitive judgments follow the same optimal statistical principles as perception and memory
[which are often explained as optimal statistical inferences, informed by accurate prior probabilities],
and reveal a close correspondence between people’s implicit probabilistic
models and the statistics of the world.
</blockquote>
How the priors are themselves constructed in the mind has yet to be investigated in detail.
Obviously they are learned from experience, but the exact process is not properly understood.
The Economist article finishes with a cautionary note for both Bayesians and frequentists:
<blockquote>
Things don't always go smoothly with a Bayesian approach.
Sometimes the process goes further and further off-track, and the authors speculate
that this might explain the emergence of superstitious behaviour, with an accidental correlation or two being misinterpreted by the brain as causal. A frequentist way of doing things would reduce the risk of that happening. But by the time the frequentist had enough data to draw a conclusion, he might already be dead.
</blockquote>
===Further reading===
* [http://economist.com/science/displaystory.cfm?story_id=5354696&no_na_tran=1 Bayes rules], Jan 5th 2006, The Economist - the full article is worth reading.
* [http://web.mit.edu/cocosci/Papers/prediction10.pdf Optimal predictions in everyday cognition], Thomas L. Griffiths, Department of Cognitive and Linguistic Sciences, Brown University, & Joshua B. Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.
** The paper shows the empirical distributions for each of the variables being estimated, along with more details about the experiment.
Submitted by John Gavin.
==Screening==
<blockquote>
From the doctors' perspective, early detection has other appealing features: ordering a test is quick and easy, and it has an established billing process--unlike health promotion counseling.<br>
--H. Gilbert Welch
</blockquote>
For a related story, see this page.
One thing almost all people know is that it is prudent to be screened for diseases because that will add to their longevity. However, according to H. Gilbert Welch, a medical doctor at Dartmouth College, it isn't necessarily so.
His book, ''Should I Be Tested For Cancer? Maybe Not And Here's Why'' [University of California Press, 2004], focuses on screening, which is a particular form of testing, and he deals exclusively with cancer as opposed to other afflictions. Screening "means the systematic examination of asymptomatic people to detect and treat disease." His contention is that screening for cancer is inefficient, in that very few people who actually have the particular cancer are both discovered and then cured. Moreover, false positives result in many problems of which the general public is not aware. On the other hand, false negatives of cancer screening are barely mentioned in his book "because we do not biopsy people with negative screening tests." That is, we can't distinguish between a false negative and a rapidly-growing cancer that emerges between screenings.
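To see why false positives loom so large when a disease is rare, consider some purely illustrative numbers (they are not figures from Welch's book). Suppose 10,000 people are screened for a cancer with a prevalence of 0.5%, using a test with 90% sensitivity and 95% specificity. Then 50 people have the cancer and 45 of them test positive, but so do about 5% of the 9,950 healthy people - roughly 498 of them. The positive predictive value is therefore
<math>\frac{0.9 \times 0.005}{0.9 \times 0.005 + 0.05 \times 0.995} \approx 0.08,</math>
so more than 90% of the positive results are false alarms, each a candidate for the follow-up testing and unnecessary treatment Welch describes.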
In a nutshell, randomized clinical screening trials for the cancers discussed in the book--lung cancer, cervical cancer, breast cancer, prostate cancer and colon cancer--have statistically shown that screening has provided very little benefit in terms of mortality. Welch argues that with the new, exquisite devices now available, such as CAT scans and MRIs, it is possible to detect cancer earlier, so that it seems that 5-year survival rates have improved; victims are living longer not because the treatments are better but only because the diagnoses were made earlier. Further, these devices are detecting what he calls "pseudodiseases," cancers which will never develop into a cancer that causes a problem. It follows that the detection of cancers which would never have been discovered years ago, when the technology was lacking, further inflates the 5-year survival rate, a figure of merit he would like to see abolished because it is so misleading.
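A purely hypothetical illustration of this lead-time effect (the numbers are not from the book): a patient whose cancer would have been diagnosed from symptoms at age 70 and who dies at 74 counts as a 4-year survivor; if screening finds the same cancer at 67 and the patient still dies at 74, he now counts as a 7-year survivor, and the 5-year survival statistic improves even though not a single day of life was gained.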
He argues that the side effects of a false positive are not to be taken lightly. Chapters 2 and 3 are entitled "You may have a cancer 'scare' and face an endless cycle of testing" and "You may receive unnecessary treatment," respectively. Certainly, in bygone days being told that you had cancer was frightening in the extreme. Perhaps not so much in these enlightened times, but a stay in a hospital, especially for an unnecessary procedure, can definitely lead to unpleasant side effects such as infection or worse.
Welch points out that there are vested interests in the screening industry: doctors, hospitals, clinics, insurance companies and lay organizations which depend for their existence, financial and otherwise, on keeping Americans fully screened and uninformed about the problems connected with screening. For example, although it has been statistically shown via randomized clinical screening trials that mammography, an unpleasant procedure at best, is not useful for women under 50, the "mammography lobby," made up of manufacturers, radiologists, ideologues and feminists who considered the studies to be a male plot, went ballistic and wanted to substitute emotion for science: The National Cancer Institute reconsidered and by 17 to 1 decided "in favor of recommending mammography to all women in their 40s."
The same sort of situation applies to prostate cancer. The accepted, conventional wisdom in the United States is that screening must be worthwhile because it seems self-evident, even though a careful look at the data points in the opposite direction. Watchful waiting, a much-used treatment for prostate cancer in Europe, is frequently ridiculed in this country by both laymen and urologists.
Welch fully realizes that his thesis--screening for most cancers is, by and large, ineffective and/or harmful--will not go over well because it "flies in the face of medical dogma." His "book is not about what to do if you know you have cancer; it is about informing the decision of whether to look for cancer when you are well." This distinction has been lost on the people I have spoken to. The conventional wisdom that cancer screening must be desirable is a notion that, as far as I can tell from discussing it with others, is unchallengeable. To be even more cynical, any doctor who doesn't order a screening test for a patient who eventually gets cancer is likely to be sued successfully, so ingrained is the conventional wisdom among the general public and judges alike.
Submitted by Paul Alper