Saturday, July 24, 2021

Experimental Evidence for ESP Is Well-Replicated

While examining the Science subreddit (www.reddit.com/r/science) the other day, I noticed there is a new meta-analysis of ESP experiments. The meta-analysis is an interesting case study in presenting evidence for paranormal phenomena in just about the most hard-to-unravel way possible. If a reader works very hard, and uses some geeky little computer tricks, it is possible to get to the core data, which is compelling evidence for extrasensory perception. But it is almost as if the authors were trying to minimize the chance of readers discovering that core data. In this post I will discuss the core data in a way that saves you all that hard work.

The meta-analysis ("Anomalous perception in a Ganzfeld condition - A meta-analysis of more than 40 years investigation" by P. Tressoldi and Lance Storm) discusses ESP experiments using what is called the Ganzfeld protocol.  A ganzfeld experiment is one in which a test for extra-sensory perception is combined with sensory deprivation achieved through methods such as cutting a ping-pong ball in half and taping it over someone's eyes, and having someone wear an earphone transmitting white noise. In these ESP experiments, the expected chance hit rate (matching of a user's selection and a random target) is 25%. Ganzfeld experiments have a long history of scoring a "hit rate" well over the expected chance result of 25%. 

What we want to know upon reading the new meta-analysis is: how high a "hit rate" did the experiments score? Unfortunately, the authors have made it ridiculously hard to discover this key number. They mention "hit rates" far above 25% reported by other meta-analysis papers, but nowhere in their paper do they report the "hit rate" found by their own meta-analysis.

Instead, the authors report what statisticians call an "effect size," a concept that will not be clear to most readers who are not scientists or mathematicians. But everyone can understand that if a long series of ESP experiments reports an average "hit rate" far above the expected-by-chance "hit rate" of 25%, then you have powerful experimental evidence for ESP.

There is a way to get the "hit rate" reported by this meta-analysis, but it requires some geeky tricks that few readers would think to attempt. If you click the link here provided by the paper, you will find a page with a series of links on the left side. If you click the third link in this series, you will see a table with some experimental results, but not the full set used in the meta-analysis: you will see only 50 rows, followed by a notice that says, "This dataset contains more than 50 rows. To see the remaining content please download the original file." There is a link that allows you to download a spreadsheet file (GZMADatabase1974_2020.xlsx). Part of it is shown below.


[Screenshot: "ESP Results," showing part of the downloaded spreadsheet]

What if you don't have a spreadsheet program on your computer? Then, short of writing a little code of your own, you're out of luck, and can't discover the key number of the "hit rate."

There is no excuse for putting such roadblocks in the reader's way. HTML, the format web pages have used since the early 1990s, has always been perfectly capable of displaying simple tabular data like this. There is no reason why the table could not have been displayed in full in the meta-analysis paper, so readers would not have to fool around with external links and downloads. And there's no reason why the paper could not have included a single sentence summarizing the number of trials, the number of successful hits, and the hit rate.

But what happens if you are lucky enough to have a spreadsheet program on your computer, so that you can download the spreadsheet and view the experimental data? You still won't get the key number of the average "hit rate" reported by the meta-analysis, for the spreadsheet doesn't include a line summarizing the results in its table.

But by using some hard-core geeky tricks, we can remedy this situation. You have to do the following (something that would not occur to 1 reader in 100):

  • In cell G115 of the spreadsheet, type this: =SUM(G2:G114)  (the total number of trials)
  • In cell H115 of the spreadsheet, type this: =SUM(H2:H114)  (the total number of "hits")
  • In cell K115 of the spreadsheet, type this: =AVERAGE(K2:K114)  (the average per-study "hit rate")
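If you would rather script it than open a spreadsheet program, here is a short Python sketch of my own that computes the same totals from the downloaded file, using the openpyxl library. The column letters (G for trials, H for hits) are taken from the formulas above, and I am assuming the 113 studies occupy rows 2 through 114, with row 1 as the header.

    import openpyxl

    # Open the file downloaded from the paper's data link.
    ws = openpyxl.load_workbook("GZMADatabase1974_2020.xlsx").active

    # Column G (7) holds trials per study; column H (8) holds hits.
    trials = sum(ws.cell(row=r, column=7).value for r in range(2, 115))
    hits = sum(ws.cell(row=r, column=8).value for r in range(2, 115))

    print(f"trials={trials}, hits={hits}, pooled hit rate={hits / trials:.1%}")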

Now finally, we get the "bottom line" numbers, shown in the last line of the screenshot below. From 1974 to 2020 there were 113 ESP experiments using the Ganzfeld protocol, which involved a total of 4841 trials and 1520 successful "hits." That is a pooled success rate of about 31.4% (the average of the per-study hit rates is 31.5%), much higher than the rate expected by chance, which is only 25%.
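As a rough check on how unlikely such a result is under chance (my own back-of-the-envelope test, not a statistic taken from the paper), a one-sided binomial test asks how probable it is to get at least 1520 hits in 4841 trials when the true hit rate is the 25% chance level:

    from scipy.stats import binom

    # P(at least 1520 hits in 4841 trials at the 25% chance rate)
    p_value = binom.sf(1520 - 1, 4841, 0.25)

    # Prints an astronomically small number, far below any
    # conventional significance threshold.
    print(f"one-sided p-value: {p_value:.1e}")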

[Screenshot: "ESP experiments," the spreadsheet with the new summary row added]

Why haven't our meta-analysis authors communicated this very clear "bottom line" result, which anyone can understand, and which is extraordinarily unlikely to have occurred by chance? Why have they reported their results only as an "effect size" that few laypeople understand? It is as if the authors were doing everything they could to obscure the evidence for ESP they have found. Indeed, the authors have failed to use any of the terms commonly used to describe ESP experiments. They have not used the words common in the literature, words such as "psi," "ESP," "extrasensory perception," "telepathy," "clairvoyance" or "mind reading." Instead they have merely used the vague term "anomalous perception," as if they were trying to minimize the number of times their meta-analysis would be found by people doing a Google search for information about ESP.

Although some of the people gathering such evidence are clumsy about clearly communicating their impressive results, the experimental evidence for extrasensory perception is very strong and very well-replicated. Using the Ganzfeld technique, ESP researchers have achieved a high level of experimental replication. But the Ganzfeld results are by no means the best evidence for ESP. The best evidence consists of (1) earlier tests reported by researchers such as Rhine and Riess, in which some subjects got results we would never expect any subject to get by chance even if every person in the world were tested for ESP every week (see here, here and here for examples); and (2) two hundred years of observational reports of clairvoyance, in which some subjects were vastly more successful than any person should ever have been by chance or coincidence (see here, here, here, here, here, here, here and here for examples).

No one has any neural explanation for how a brain could produce psi effects such as ESP. Evidence for ESP is fatal to the claim that the human mind is merely a product of the brain. This is why people who maintain that claim have again and again so stubbornly refused to admit the existence of ESP. They almost always take a "head in the sand" approach, simply refusing to examine the evidence on this topic. Such mindless non-scholarship is a very strong "red flag" suggesting their beliefs about the brain and mind are fundamentally wrong. Two of the biggest "red flags" suggesting that someone's beliefs are dogma rather than scientifically warranted are (1) a refusal to seriously study a very large body of important observational reports relevant to those beliefs, and (2) a tendency to make untrue statements about factual matters related to those beliefs. Very many professors following the "brains make minds" dogma display both of these "red flags."

Postscript: The 1961 book Challenge of Psychical Research by Gardner Murphy discusses some of the experimental evidence for ESP. Beginning on page 57, the book discusses a series of experiments involving a student named Van Dam. The student was blindfolded and put in a sealed cubicle in one room. In another room, someone chose by lot one of the squares of a grid.

The blindfolded Van Dam was then asked to guess which square had been chosen. There were 187 trials, done on 7 different days. The expected number of successes by chance was only about 4. The actual number of successes was 60, a success rate of about 32%. You would not expect a result half as good to ever occur by chance even if every person in the world were tested.

The pages preceding page 75 discuss the Pearce-Pratt ESP experiment, involving two people in different buildings. We read on page 75 that there were 558 successes in 1850 trials, for a success rate of 30%, in a situation where the expected chance result was only 20%, or 370 successes. The probability of getting such a result by chance was calculated at less than 1 in 10 to the twenty-second power, less than 1 in ten billion trillion.
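The same one-sided binomial test used above can verify that figure (again, my own check rather than the book's calculation):

    from scipy.stats import binom

    # P(at least 558 hits in 1850 trials at the 20% chance rate):
    # comes out far below 1 in 10 to the 22nd power, consistent
    # with the figure quoted in the book.
    print(f"{binom.sf(558 - 1, 1850, 0.20):.1e}")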
