ESP Study Proves Science Still Works

The fallibilities of science as an institution have frequently been on display lately. I’m hoping to find some time to write about that. But today I want to make note of something else: evidence of science working out well. Specifically, the freshly released extra-sensory perception study and the scientific response to it.

I’m not particularly excited about the media response to the paper. The media’s take is still unfolding and, although I haven’t seen any really egregious coverage yet, based on past experience we can generally anticipate the usual noncommittal half-phrases that emerge from the clumsy coupling of science-agnostic journalists and journalism-indifferent scientists.

But I am kind of excited about the way that science — meaning science as an institutional whole — has dealt with the research. I’m excited for a few reasons.

First of all, somebody decided to do a reasonably rigorous study on an interesting, hugely improbable phenomenon. It’s colossally unlikely that ESP exists in the world, but there is enough general interest in it that somebody might as well take a few weeks to run some actual tests. Not of course iron-clad conclusive tests (such things rarely exist outside of physics and maybe chemistry), but some experiments that are well-formed enough to be at least cautiously suggestive of the outlines of truth within some limited contexts. You know, scientific research.

Highly improbable phenomena rarely turn out to be true. Single experiments are almost never likely to prove whether or not they’re true, either way. But true paradigm shifts in our understanding of major phenomena do occur often enough that it seems worthwhile to occasionally run some off-the-wall research, as long as the research is usefully competent, and reasonably cheap. Individual experiments may suggest some surprises, and those surprises are likely to eventually get explained as unimportant exceptions. They may alternatively or also induce some usefully novel thinking that breaks up comfortable patterns of observation. So colour me skeptical and maybe bemused, but I can’t work up any actual anger that somebody would do a study on ESP.

Second, the paper was submitted for peer-review. Peer review has flaws, lord lord. But it’s not such a bad procedure all-in. Sort of like a pre-trial: is the evidence here good enough to keep the accused in lock-up and use up the court’s time with a real trial, or should the whole she-bang be tossed off the docket so we can get on to the real deal? Or rather, is the research described plausible enough to merit the symbolic weight of the journal’s logo in the top-left corner? Or is it dubious enough that we shouldn’t subject real working researchers to the onus of having to skim past the title and maybe the abstract the next time they’re scanning the results of their daily keyword search alerts?

The real world uses Digg or Reddit or Facebook or whatever to do its focus-filtering. Science uses the peer-reviewed journal method, and it works at least OK (it also uses Digg or Reddit or arXiv or whatever). Sure, prestige, apathy, vengeance, intellectual provincialism and ignorance all have a place in the peer review process, but mostly you get forthright and well considered opinions from people who have a reason to know.

In this case the panel of four peer reviewers was sufficiently convinced of the plausibility of the research to pass it along as worth reading by other people. So it’s probably interesting research. So it’s good we get to hear about it.

Thirdly, smart people read the paper and are trying to figure out how the results could be the way they are. Are there experimental design flaws? Statistical flaws? Could it be that the student-subjects were actually influenced by future events? (Let’s check for statistical flaws again.) Isn’t science amazing — if you do your work well enough, scads of brainiacs will add value by contextualizing and critiquing all the bits. For free! At least, free to you (except for when you pay your taxes). Sometimes that process is collegial, sometimes it gets personal and even ugly. I suspect we’re going to see both in this case. But we’re definitely going to see a lot of clever cats blasting away with both lobes. Fun.

I’m a little saddened by and a little sympathetic to the folks who are outraged that this topic is getting treated seriously at all. And yes, I’m sure that the foil-beany woo woo brigade will be barking about a paper proving ESP in a major journal for internet-years to come. There’s also an argument to be made that, given the utter implausibility of extra-sensory perception, diverting the attention of working researchers and the public towards it for any amount of time is a waste of that time. But whatever. The guy did (apparently) real research. It may or may not have experimental flaws, but if he’s maintained the intellectual respect of his intellectual peer group for this long it’s unlikely that he actively gamed his own system, or deliberately fiddled his numbers afterwards. Occasionally a working researcher with some one-off weirdo reason for kinking their own integrity will slip one past the peer review process (e.g. 1, 2, 3). The process mostly works on a presumption of good faith, and is susceptible on those grounds. But those are very rare events. If the media narrative is to be believed, real researchers really respect Dr. Bem’s considerable research record, so I’m guessing this is good-faith experimentation. In which case: hey, he deserves to present it to the community. Let’s have at it.

Fourth, the experiments are going to be replicated. One place I’m not so proud of science is that, as far as I can tell, this doesn’t really happen. Every high school student is told that replication is very important to the scientific method. I suspect it almost never happens in practice. Because it’s boring.

What does commonly happen is that people adapt your published experimental premises in somewhat different circumstances, and those variants both produce fresh knowledge and sort-of stand in for replication. Oh, you think you showed that turtle gender is influenced by Great Lakes water chemistry? What about if I try it in frogs in Lake Baikal? This time I think we’re going to get to see straight-up replication. Should be interesting. And it will make all those high school textbooks true for a moment.

Fifth and finally, even if the results of the study are eventually deemed to not be reflective of the whole truth, they are in this case guaranteed to be at least interestingly wrong. Which is possibly the best kind of research result. Most of the time, interestingly wrong studies will throw some little cul-de-sac of current research consensus into relief, and spark some questions that are interesting to people in one side of one building on campus. This time the questions emerging seem to be more grand, like: what if the common statistical framework used by the discipline of psychology is a) not equivalent to that used in other disciplines and b) not entirely suitable to assessing claims of extraordinary uniqueness (whatever that means)? Wouldn’t that be fun to know! So thanks Dr. Bem for helping us to find out. Even if ESP isn’t true.

Indeed, I would bet my kidneys against there being any actual ESP out there. I mean come on! It just sits way too far outside of the network of forces and facts that I have personally perceived or come to trust in the world. But I’m rather pleased about the reactions so far of the institution of science, an institution I’m rather fond of. That reaction could be going much worse — science doesn’t always deal well with institutional problems that come at it from oblique angles. So far, so good.

4 comments:

It’s a relief to read something that both affirms and develops my understanding of things. I am happy to read this today, in part, because Bem was on Colbert last night. It was a little unfortunate – when asked by Colbert why a result of 53% was significant, Bem compared his 53% results to contexts with much larger numbers – like casinos and federal elections. He might have statistically significant (and sound) results but his conflation of the different magnitudes of numbers turned me off. Actually it made me wonder why any scientist would want to defend, rather than just talk about, their results on Colbert. Anyway, I much appreciate this defense of science and common sense. Ha ha, I am a little disappointed to hear that experiments are rarely repeated! It makes me want to open an institute of repeatability and just commit ourselves full time to doing other people’s experiments.
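The commenter’s point about magnitudes is easy to check: whether a 53% hit rate against a 50% chance baseline is significant depends almost entirely on how many trials sit behind it. A minimal sketch, using the normal approximation to the binomial (Bem’s actual analyses were more involved, and the trial counts here are made up for illustration):

```python
import math

def binom_pvalue_normal(hits: int, n: int, p0: float = 0.5) -> float:
    """One-sided p-value for observing >= hits successes in n trials
    under a null success probability p0, via the normal approximation
    to the binomial distribution."""
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = (hits - mean) / sd
    # Survival function of the standard normal, via the complementary
    # error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

# The same 53% hit rate at three different (hypothetical) trial counts:
for n in (100, 10_000, 100_000):
    hits = round(0.53 * n)
    print(f"n={n:>6}: p = {binom_pvalue_normal(hits, n):.2e}")
```

At 100 trials, 53% is noise; at tens of thousands of trials it becomes overwhelming evidence of a deviation from chance. That is the sense in which casino margins are an apt comparison, and also why quoting a bare percentage without its sample size invites exactly this confusion.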

Sounds good, let’s apply for funding.

While we’re at it, I want to publish a Journal of Negative Results.

Hey, someone has already tried to repeat the test:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1699970
http://www.essays.se/about/Gerg%C3%B6+Hadlaczky/
Tripped over those at an old Wired article on Bem.

Ah ha. I didn’t read either of those (or the Bem paper that kicked this controversy off for that matter). But I note from the abstracts that the second paper you link to was published in 2006 as a response to an earlier Bem paper from 2003.

That’s interesting. I don’t remember any of the recent commenters pointing out that Bem had done earlier research into precognition/ESP. Instead they mentioned his high standing in the community and history of mainstream research. How did they miss the point that he had already published on ESP, that replication studies were attempted, and that they failed to reproduce his results?

If I was really serious I would go and read through those replication studies in detail, and try to spot flaws or strengths that might explain why they’re getting different results than the originals. Of course, that would require the domain-specific knowledge, the lack of which prevents me from meaningfully assessing the original paper either.
