Talk:Parapsychology

From RationalWiki

This Psychic powers-related article has been awarded BRONZE status for quality. It's getting there, but could be better with improvement. See RationalWiki:Article rating for more information.


Give the Null Hypothesis a Chance

My teacher gave us this essay to read, and it was excellent. You can find it here. I'd add it to Further Reading, but I feel uncomfortable doing so without discussing it first. Perhaps we could even expand the article to cover the reasons listed in the essay for why parapsychology is a pseudoscience? Alecwh 12:21, 6 December 2008 (EST)


Is parapsychology pear-shaped? — Unsigned, by: 212.85.6.26 / talk / contribs 16:01, 19 February 2010 (UTC)

This really needs work, as it is full of unsubstantiated assertions, ranging from the historical (example: Rhine was responsible for poor experimental design in parapsychology? That needs actual citations demonstrating (a) flaws in Rhine's research methods and (b) evidence that Rhine's methodological issues, if they exist, became widespread in the field over the next six decades; on the face of it unlikely, given the attention paid to sceptical critiques in the parapsychological community). The claim that psi is pseudoscience really needs addressing as well, by reference to which proposed model of pseudoscience you are using, and so forth. There is immense scope for a discussion of parapsychology's contributions to scientific method (see Richard Wiseman's forthcoming book), meta-analysis, and ganzfeld studies here. This article needs urgent attention.— Unsigned, by: 92.235.153.158 / talk / contribs

The previous comment about the need for more sources and detail to substantiate allegations was by me. I wanted to be able to sign my comment so I have registered. I'll help with the article if you want, though I'm active in the field of parapsychology myself. ~cj.23
Sure. If you have better references please add them. :) Scarlet Abomination 19:03, 12 December 2010 (UTC)

Interesting paper

This paper is a survey of the field as of 1998. It includes useful historical information we should really include about the problems parapsychology has as an aspiring science: there is no database, it doesn't build on past work, and it discards old failed results (because the methodology provably sucked) and runs entirely new ones based on its idée fixe - David Gerard (talk) 22:51, 25 February 2012 (UTC)

Daryl Bem

The correct title for reference 38 is "Understanding uncertainty: ESP and the significance of significance." It does not criticize Bem's experimental skills in Feeling the Future, but uses that research as an example to argue that when an effect size is very small (as effect sizes often are), standard significance tests, whether frequentist or Bayesian, are not adequate. This is an area of active debate in psychology and statistics. Further, it takes more than criticism of one paper to impugn the skills of a distinguished scientist. Full disclosure: Daryl Bem chaired my thesis committee at Cornell.

Gcolvin (talk) 22:28, 23 February 2015 (UTC)Gcolvin

David Gerard undid my edit, saying "yes, yes, you like him, but his experimental design here was really obviously terrible".

Our policy is: "Articles about people who are still alive. Such articles should be handled carefully and well-referenced. The Wikipedia policy on bios of living people is a good start on how to be sensible. Reference contentious allegations solidly."

The article referenced does not support any claims against Bem's design skills, so the reference is far from solid.

We contend that Daryl Bem's experimental skills are poor. That is defamatory. And it would be difficult to argue that we are right, as it goes against the professional judgment of the editors and peer reviewers of his many papers over his long career, and the hiring committees of Harvard, Stanford, and Cornell. Defamatory and false means libelous.

Neither is the design of Feeling the Future "really obviously terrible." It wasn't obvious to the reviewers and editors at JPSP, who are much better qualified than we are. The design hasn't been criticized much at all, and has even been praised by some critics, as it is taken from the standard design of the corresponding non-time-reversed experiments. Criticism has mostly centered on the conduct of the experiments and the analysis of their results, and has mostly been highly technical and inconclusive.

I will wait a day or so and redo my edit, unless we arrive at a different conclusion.

Gcolvin (talk) 02:11, 26 February 2015 (UTC)

"The article referenced does not support any claims against Bem's design skills, so the reference is far from solid." How did you come to that conclusion? To me it reads like it does exactly that because it questions Bem's ways of understanding, choosing, calculating and interpreting probability and that is part of the research design. You even write that "[c]riticism has mostly centered on the conduct of the experiments", which sounds exactly like an acknowledgement of flawed research design (or possibly an inability to adhere to a good design, which is hardly better) - just not the experimental bit (depending on how narrowly you define "experimental", i.e. whether it includes the tools you choose to interpret your experiments) which is why I've changed the wording slightly.
Just because it got published doesn't mean that it's set in stone, or even necessarily good. Given that Bem's results have apparently not been replicated by anyone and heavily criticised, I really don't think that "highly technical and inconclusive" is a fair description of the criticism.
Still, I decided to err on the side of caution and remove Bem's name. This way RW just refers to that one article of his (obviating the problem of someone dragging up other Bem articles and saying "See? Bem has good research design skills!") and given the criticism of that Bem article already cited in RW's entry on it, I hope this will appease the lawyers. ScepticWombat (talk) 03:56, 26 February 2015 (UTC)
Thank you, ScepticWombat, for removing the disparagement of my friend.
I can add a summary of and link to our Feeling the Future page to this one.
McConway is (and says he is) specifically criticizing significance testing itself. Not design, not conduct, not Bem, but the standard statistical analyses that are done after the experiments. (This link might serve better on our Feeling the Future page as an accessible introduction to the relevant statistics.)
Nothing in science is ever set in stone. Feeling the Future is controversial in part because the skeptical, expert reviewers at a major journal judged that the design, conduct, and analysis of some experiments showing psi are correct. In my reading of the back-and-forth debate there is little criticism of design, some of conduct, and much debate about the analysis. How could debates about the correct choice of prior probabilities for a Bayesian analysis not be highly technical? I say inconclusive because it hasn't come to a conclusion. It's an ongoing debate that needs to be settled via replication.
There have been many attempted replications. Bem has attempted to find all of them for meta-analysis:
Feeling the Future: A Meta-analysis of 90 Experiments on the Anomalous Anticipation of Random Future Events.
Gcolvin (talk) 07:49, 26 February 2015 (UTC)
I haven't "remov[ed] the disparagement of [your] friend." I've simply tried to hedge against a possible libel suit by pointing to the article, rather than the person; anyone engaging in ESP "research" doesn't need my help to be disparaged.
What McConway is effectively writing is that Bem is engaging in statistical testing without realising what he's actually testing by using these tools. This is to me part of a (flawed) research design, because you have to realise exactly what your statistics are measuring. Research design is not just about how you set up your experiments or otherwise gather your data, but also your choice of tools for interpretation and the assumptions behind all of these elements.
You write that "[t]here have been many attempted replications", but the key word here is attempted, rather than successful. Not to mention that I suspect a couple of well-known factors may have been involved in the publication of the initial article, namely the "controversy sells" and "positive results are better than falsification" biases (the latter is well known from medical journals, where articles about the lack of effect of something are on average harder to get published). So no, I'm not at all impressed by this one peer-reviewed article.
Anyway, I suspect ESP and parapsychology "researchers" probably left the territory of serious (or actual) research the moment they decided that "Hey, maybe there's something to this woo, despite a total lack of evidence and no mechanism for how it works!". ESP belongs in the same trash can as subtle energy, homeopathy and similar long-debunked ideas (the fact that homeopathy is still being "researched" is a great parallel to parapsychological "research"). I find it depressing that people can actually get funds to waste on such frivolity when the money could have been spent on actual research.
To quote McConway's conclusion: "If there is really some kind of ESP effect, it is not large (otherwise it would have been discovered years ago). So any such effect must be relatively small and thus not straightforward to observe against the inevitable variability between individual experimental participants." In other words: if(!) there even is such a thing as ESP, it makes no practical, discernible difference over pure chance. I might as well claim small effects of woo goblins and then just do experiments until the end of my life and point to tiny anomalies in data as evidence, while ignoring all the negative results. Why should anyone give a fuck, let alone give me money for this? Essentially, Bem seems to be relying on fucking around with statistical tools, which to me recalled the old adage: "There are lies, damned lies, and statistics". ScepticWombat (talk) 11:28, 26 February 2015 (UTC)
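An aside to make the quoted point concrete: the following is a minimal Python sketch, entirely my own illustration rather than anything from McConway or Bem. The 36 trials per subject is an assumed figure, 53.1% is the hit rate quoted further down this page, and the point is simply that a roughly three-point shift in a binary hit rate is swamped by ordinary per-participant variability.

import numpy as np

rng = np.random.default_rng(0)
trials_per_subject = 36   # assumed number of binary guesses per participant
n_subjects = 10_000       # simulated participants per condition

# Per-subject hit rates under pure chance and under a small hypothetical effect
chance = rng.binomial(trials_per_subject, 0.5, n_subjects) / trials_per_subject
psi = rng.binomial(trials_per_subject, 0.531, n_subjects) / trials_per_subject

print(f"pure chance : mean hit rate {chance.mean():.3f}, sd {chance.std():.3f}")
print(f"small effect: mean hit rate {psi.mean():.3f}, sd {psi.std():.3f}")
print(f"chance-only subjects scoring above 53.1%: {(chance > 0.531).mean():.1%}")

With per-subject noise of roughly plus or minus 8 percentage points, a 3-point shift is invisible at the individual level; around a third of pure-chance guessers clear 53.1% anyway, which is why the whole argument turns on the statistics of large pooled samples rather than on anything visible in a single experiment.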
Whatever your motive, ScepticWombat, thank you.
Once again, McConway's arguments apply to all research involving significance testing of small effects. All. He doesn't say such research is impossible; he says it's not straightforward, and that given the small effect sizes it's likely that Bem went wrong that way. He does not say that Bem lacks experimental skills.
That's a funny 200-year-old warning about statistics (Disraeli, says Twain), but do you seriously want to ignore all research that uses statistics? Your opinion seems to be: ESP is impossible woo, therefore Bem is just 'fucking around with statistical tools', and nobody else should give a fuck. Is that RW's opinion? It's not the opinion of the experts who vetted Bem's work for publication. Wrong? Very likely, but still being researched. Irrational? No. Torturing data? No. Ignoring negative results? No. Just fucking around? No.
So why is it important for us to question Bem's skills? That is my primary objection.
Gcolvin (talk) 16:47, 26 February 2015 (UTC)
I repeat: Bem may not lack "experimental skills", but I was referring to the larger research design, and here I think McConway really hammers the study, because he's essentially saying that Bem either doesn't measure what he thinks he measures, or doesn't understand how the statistics he uses actually work. Bem's "research" deserves the same ridicule as homeopathy: it's a waste of time and resources on long-debunked ideas, and neither can even suggest a plausible mechanism for why or how their woo would actually work. The only thing elevating ESP slightly over homeopathy is that at least ESP doesn't risk having people rely on useless quackery rather than potentially life-saving actual medicine.
Until anyone actually manages to successfully replicate Bem's study and preferably finds solid evidence for ESP, incl. why and how it would actually work, I think it's far more likely that this article is either based on a fluke, or an outright lapse in the peer review process (they do happen, you know).
I cited the statistics adage because fucking around with (as opposed to the proper use of) statistics is sadly far from an unheard-of phenomenon. But please do go full reductio ad absurdum and argument from authority; I'm sure it will make ESP look much more credible... ScepticWombat (talk) 18:40, 26 February 2015 (UTC)
McConway does not claim that Bem's designs are faulty, that he misconducted his experiments, or that he is ignorant of statistics. He claims that the standard statistics used by Bem, and by his critics, can't be trusted when effects are small. McConway's argument is not with Bem so much as with these standard techniques. There have been at least 90 attempted replications of Bem's experiments, both successful and unsuccessful (see the link above or on our Feeling the Future page). As for arguing from authority? I don't think we should be substituting our opinions and arguments for those of the peer-reviewed scientific literature. Rather, we should be (and often are) citing the literature and letting it speak for itself. That's a simple matter of respect for the scientific method. So why are you so set on this phrase and reference? Is it not enough to say that "tests showing any positive results tend to be poorly designed"?
Gcolvin (talk) 20:42, 26 February 2015 (UTC)
"He claims that the standard statistics used by Bem-and his critics-can't be trusted when effects are small. McConway's argument is not with Bem so much as these standard techniques." In other words, Bem is using the wrong analytical tools for the task he has set himself. I'd say that this is a pretty good definition of a bad research design. It's also a greater problem for Bem than for his critics, because it's Bem who claims that these statistical tools can discern ESP. Pointing to the fact that these are "standard techniques" is not a defence either, since one of the hallmarks of a good researcher is the ability to not simply grab blindly into a standardised methodological tool kit. As McConway put it: "If there is really some kind of ESP effect, it is not large (otherwise it would have been discovered years ago)." This in and of itself suggests a bit more reflection, care, and, dare I say creativity, when crafting a research design.
However, even this leaves the problem with ESP's lack of explanation of how or why it would even work, which, when combined with the general absence of evidence, means that I see no problem in highlighting the extremely controversial nature of one of the few peer reviewed articles claiming to have found such an effect. RW actually does let let the literature "speak for itself" exactly by highlighting these issues. Just saying that "tests showing any positive results tend to be poorly designed" is extremely vague, and pretty pointless when a controversial example can actually be cited.
Of course, once Bem manages to actually convince enough (other) scientists that he has actually discovered ESP and it becomes a scientific consensus, then we can start revising. Until then, ESP remains firmly in the woo bin. ScepticWombat (talk) 21:15, 26 February 2015 (UTC)

Bem would be subject to even more criticism if he invented new statistical techniques rather than using long-established standards. And yes, there are known problems with these techniques, for which the best solution is very large samples. If Bem's designs and stats are bad then so is most of experimental psychology. If they are good they provide a modern tool to better support or refute ESP. Science advances in part by investigating small deviations of observations from current theories. That is what Bem and the many scientists attempting to replicate his findings are up to, with a sample size of over 12,000 people so far. You call it woo, I call it science. Gcolvin (talk) 23:36, 26 February 2015 (UTC)
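An aside on the "very large samples" point above: here is a rough back-of-envelope sketch, my own and not drawn from any of the papers discussed, that just computes the standard error of an observed hit rate for a few hypothetical trial counts and compares it with a 3.1-point shift (the 53.1%-versus-50% figure quoted elsewhere on this page).

import math

def se_of_hit_rate(n_trials, p=0.5):
    # Standard error of an observed hit rate over n_trials independent binary guesses
    return math.sqrt(p * (1 - p) / n_trials)

effect = 0.031  # a hypothetical 53.1%-versus-50% shift in hit rate
for n in (100, 1_000, 10_000, 100_000):
    se = se_of_hit_rate(n)
    print(f"n = {n:>7,}: SE = {se:.4f}, effect/SE = {effect/se:.1f}")

An effect of that size only stands clearly apart from chance once the pooled trial count runs into the thousands or tens of thousands, which is the arithmetic behind both the meta-analysis and the demand for large replications.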

"An argument from authority, when correctly applied, can be a valid and sometimes essential part of an argument that requests judgement or input from a qualified or expert source." The editors and reviewers at JPSP are qualified experts. They do not claim that Bem is right, just that they judge his research to be sound and worthy of publication. You disagree with them, I disagree with you, and the scientists doing the research probably don't care what we think.

So I suggest that I create a section for Bem on this page that summarizes and links to our page on Feeling the Future, and drop your opinion that "the research design and analytical skills displayed in Feeling the Future" are deficient. That debate is already covered in our page on Feeling the Future, with cites to the relevant papers. I can also move our citation of McConway there. I'm fine with a statement at the end that simply summarizes this page's conclusion that most positive results in psi have been shown to be spurious. Would that be acceptable to you? If not, how do we resolve this disagreement? Gcolvin (talk) 02:28, 27 February 2015 (UTC)

I suggest you accept that nobody buys your whitewash. Bem's paper was really obviously statistically bogus, even if you can't accept that's possible - David Gerard (talk) 08:17, 27 February 2015 (UTC)
Who said anything about "invent[ing] new statistical techniques rather than using long-established standards"? I was merely pointing out that mindlessly applying "standard techniques" to an area, such as ESP, which is not "standard" is not particularly brilliant. These techniques were not developed to scrutinise ESP, which, as we've already been over, cannot simply be lumped in with "most of experimental psychology", since, unlike the latter, ESP doesn't actually operate within what we usually define as the normal laws of physics, biology etc. While we may not understand all of the details about how the human psyche works, psychology still operates under far fewer general assumptions than parapsychology (incl. ESP), because the latter would have to come up with some mechanism through which ESP could actually work, in addition to the problem that its empirical data for the existence of parapsychological phenomena is far from solid. I suppose you're not actually suggesting that the psychological effects investigated by experimental psychologists tend to have the problem of ESP, namely being hard to distinguish from pure chance? Because that's pretty much the corollary of insisting that "[i]f Bem's designs and stats are bad then so is most of experimental psychology."
My initial edit was simply to avoid statements that might be libellous (I agree with you that avoiding these is just good sense). However, I see no compelling reasons to add extra qualifying sections to this article, since the text links directly to the RW discussion of Bem's article (that's kind of the point of having hyperlinks). Since Bem's article seems to be the best ESP/parapsychology researchers have come up with, citing it and the criticism of it indicates how poor the evidence for ESP (and the rest of parapsychology) is. This RW has already done in Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect and that's why I think a general statement and an RW link is sufficient for this page. Anyone interested in the nitty gritty can go to the relevant entry to read more.
Still, it's a wiki, so knock yourself out, but don't be surprised to see reverts if the resulting text makes it seem like RW is suddenly starting to hedge on whether ESP is woo, just because Bem's extremely controversial conclusions made it through peer review and were then met with plenty of criticism. Remember that RW is not WP and doesn't operate under NPOV.
If I had to come up with a way to square the circle, it would be merging RW's entry on Bem's article into parapsychology as a sort of "the best parapsychology has to offer"-section. ScepticWombat (talk) 09:02, 27 February 2015 (UTC)
I see, rather than ("blindly") using the standard experimental and statistical techniques used by psychologists for this kind of experiment, Bem should have used some other techniques. But you don't say what techniques, or how they would be better.
Our Feeling the Future page is longer than any section in Parapsychology, so I don't think merging them is a good idea. Better to create a section on Bem briefly discussing all of his psi work with a reference to Feeling the Future.
Neither am I happy with letting just one editor prevent me from making what I consider to be improvements. Where do we go from here?
Gcolvin (talk) 20:22, 2 March 2015 (UTC)
If you're investigating parapsychology, you can't expect that tools used by psychology are necessarily applicable, for the reasons I've already listed and which McConway suggested as well.
I agree that a merge is probably not feasible.
And in case you've missed it: I'm not the only editor to disagree with you. If you check the fossil record, your changes were actually reverted by David Gerard and Tanis, but you could always try to convince others to enlist in your cause. You know, ask for support on other editors' talk pages 'n stuff. ScepticWombat (talk) 20:31, 3 March 2015 (UTC)

As a former student of Dr. Bem I should not be editing this paragraph, so I won't. I still think the last phrase is potential libel and should be removed. Note "potential." It doesn't matter what we think, only what a jury might be convinced of. So thank you Mr. Wombat, for a rational discussion. And thank you Mr. Gerard, for insulting me. Gcolvin (talk) 01:00, 24 April 2015 (UTC)

At least there is one serious paper about psi ...

In an issue of the Journal of Personality and Social Psychology (JPSP), Cornell psychology professor Daryl Bem has published an article, Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. The paper presents evidence from nine experiments involving over 1,000 subjects suggesting that events in the future may influence events in the past, a concept known as "retrocausation". Bem is not just any psychologist; he is one of the most prominent psychologists in the world. What do you think about it?— Unsigned, by: Jordicuest / talk / contribs

Show us a replication - David Gerard (talk) 21:16, 11 June 2016 (UTC)
I read the first experiment described, and it seems that the subjects had an average 53.1% chance of randomly selecting a sexually explicit image out of two choices, the other being a wall. The sessions contained either 12 or 18 of those images, so it looks like the subjects averaged 6.37 or 9.56 dirty pictures rather than the expected 6 and 9, respectively. Hertzy (talk) 12:33, 12 June 2016 (UTC)
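For what it's worth, a quick Python sketch of the arithmetic above, my own illustration only: it treats the reported 53.1% average as if it were a single guesser's hit rate, which is a simplification, and it uses scipy for the binomial tail probability.

import math
from scipy.stats import binom

hit_rate = 0.531
for n_erotic in (12, 18):
    expected_chance = 0.5 * n_erotic
    implied_average = hit_rate * n_erotic
    k = math.ceil(implied_average)
    # Probability that a pure 50/50 guesser matches or beats that average in one session
    p_at_least = binom.sf(k - 1, n_erotic, 0.5)
    print(f"{n_erotic} erotic trials: chance expects {expected_chance:.1f} hits, "
          f"reported average implies {implied_average:.2f}; "
          f"P(chance guesser scores >= {k}) = {p_at_least:.2f}")

Roughly four in ten pure-chance guessers would match or beat the reported average in a single session, so the difference Hertzy points out is only ever an aggregate statistical claim, not something any individual result could show.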
And to what extent do "expectations of the future" affect "interpretations of the present/past"? 31.51.113.108 (talk) 09:33, 11 July 2017 (UTC)

A key point

... to remember is that it is an essential part of (most) human nature to think that 'there is more in heaven and earth than in most scientific philosophy.' It is this perception that makes us human and makes us want to explore. 31.51.113.108 (talk) 09:33, 11 July 2017 (UTC)

Sheep-goat effect

I just saw the page on the sheep-goat effect on The Skeptic's Dictionary, and I'm slightly freaked out by it. Apparently belief in psi does skew the results of psi experiments one way or the other, though this could easily be chalked up to confirmation bias leading to (subconscious or otherwise) attempts to "prove" the issue rather than letting the evidence decide. --Luigifan18 (talk) 15:25, 1 May 2024 (UTC)