Saturday, July 24, 2021

Experimental Evidence for ESP Is Well-Replicated

While examining the Science subreddit on www.reddit.com (www.reddit.com/r/science) the other day, I noticed there is a new meta-analysis about ESP experiments.  The meta-analysis is an interesting case study in presenting evidence for paranormal phenomena in pretty much the most hard-to-unravel way possible. If a reader works very hard, and uses some geeky little computer tricks, it is possible to get to the core data, which is compelling evidence for extrasensory perception. But it is almost as if the authors were trying to minimize the chance of readers discovering such core data.  In this post I will discuss that core data in a way that saves you from doing all that hard work. 

The meta-analysis ("Anomalous perception in a Ganzfeld condition - A meta-analysis of more than 40 years investigation" by P. Tressoldi and Lance Storm) discusses ESP experiments using what is called the Ganzfeld protocol.  A ganzfeld experiment is one in which a test for extra-sensory perception is combined with sensory deprivation achieved through methods such as cutting a ping-pong ball in half and taping it over someone's eyes, and having someone wear an earphone transmitting white noise. In these ESP experiments, the expected chance hit rate (matching of a user's selection and a random target) is 25%. Ganzfeld experiments have a long history of scoring a "hit rate" well over the expected chance result of 25%. 

What we want to know upon reading the new meta-analysis is: how high a "hit rate" did the experiments score? Unfortunately, the authors have made it ridiculously hard to discover this key number. The meta-analysis authors mention "hit rates" far above 25% reported by other meta-analysis papers. But nowhere in their paper do they report the "hit rate" found by their own meta-analysis. 

Instead, the authors report what statisticians call an "effect size." The concept of an effect size will not be clear to non-scientists or non-mathematicians.  But everyone can understand that if a long series of ESP experiments reports an average "hit rate" far above the expected-by-chance "hit rate" of 25%, then you have powerful experimental evidence for ESP. 

There is a way to get the "hit rate" reported by this meta-analysis, but it requires some geeky tricks that few readers would think to attempt. If you click the link here provided by the paper, you will find a page with a series of links on the left side. If you click the third link in this series, you will see a table with some experimental results. But you will not see the full set of experimental results used in the meta-analysis.  You will see only 50 rows. There is then a link that says, "This dataset contains more than 50 rows. To see the remaining content please download the original file."  There is a link that allows you to download a spreadsheet file (GZMADatabase1974_2020.xlsx). Part of it is shown below.


[Screenshot: ESP results from the downloaded spreadsheet]

What if you don't have a spreadsheet program on your computer? Then you're out of luck, and can't discover the key number of the "hit rate."

There is no excuse for presenting such road blocks to the reader. Web sites have been perfectly capable of displaying simple tabular data like this since the early 1990's, using plain HTML. There is no reason why such tabular data could not have been fully displayed in the meta-analysis paper, so readers would not have to fool around with external links and downloads.  And there's no reason why the paper could not have included a single sentence summarizing the number of trials, the number of successful hits, and the hit rate. 

But what happens if you are lucky enough to have a spreadsheet program on your computer, and you can download the spreadsheet, and view the experimental data? Then you still won't get the key number of the average "hit rate" reported by the meta-analysis.  For the spreadsheet table doesn't include a line summarizing the results in the table. 

But by using some hard-core geeky tricks, we can remedy this situation. You have to do this (something that would not occur to 1 reader in 100):

  • In cell G115 of the spreadsheet, type this: =SUM(G2:G114)
  • In cell H115 of the spreadsheet, type this: =SUM(H2:H114)
  • In cell K115 of the spreadsheet, type this: =AVERAGE(K2:K114)
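For readers who lack a spreadsheet program (the roadblock complained about above), here is a minimal Python sketch of my own that does the same arithmetic. It assumes the downloaded GZMADatabase1974_2020.xlsx file keeps its trial counts, hit counts, and hit rates in columns G, H and K (the columns the formulas above refer to), with a header row followed by the study rows; the file's actual column headers are not shown in the paper, so the code selects columns by position.

```python
# A rough alternative to the spreadsheet formulas above (assumes trials,
# hits, and per-study hit rate sit in columns G, H, and K of the file).
import pandas as pd

df = pd.read_excel("GZMADatabase1974_2020.xlsx")

trials = df.iloc[:, 6]      # column G, 0-indexed position 6
hits = df.iloc[:, 7]        # column H
hit_rates = df.iloc[:, 10]  # column K

print("Total trials:", trials.sum())
print("Total hits:", hits.sum())
print("Mean per-study hit rate:", hit_rates.mean())
```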

Now, finally, we get the "bottom line" numbers, shown in the last line of the screen shot below. From 1974 to 2020 there were 113 ESP experiments using the Ganzfeld protocol, involving a total of 4841 trials and 1520 successful "hits," for an average success rate of about 31.5%, much higher than the 25% expected by chance. 

[Screenshot: ESP experiments spreadsheet with the totals added in row 115]
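To get a feel for how unlikely such a total would be under the chance hypothesis, here is a rough back-of-the-envelope sketch of my own. It treats all 4841 trials as independent, each with a 25% chance of a hit; that simplification is mine, not the meta-analysis authors', who instead report effect sizes.

```python
# Probability of getting at least 1520 hits in 4841 trials if each trial
# has only a 25% chance of a hit (a simple binomial model of pure chance).
from scipy.stats import binom

n, k, p = 4841, 1520, 0.25
p_value = binom.sf(k - 1, n, p)  # P(X >= k)
print(p_value)  # an astronomically small number
```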

Why haven't our meta-analysis authors communicated to us this very clear "bottom line" result, which anyone can understand is a result that is extraordinarily unlikely to have occurred by chance? Why have they reported their results using only an "effect size" that few laypeople understand? It is as if the authors were doing everything they could to obscure the evidence for ESP they have found.  Indeed, the authors have failed to use any of the terms commonly used for describing ESP experiments. They have not used the words commonly used in the literature, words such as "psi," "ESP," "extrasensory perception," "telepathy," "clairvoyance" or "mind reading." Instead they have merely used the vague term "anomalous perception," as if they were trying to minimize the number of times their meta-analysis would be found by people doing a Google search for information about ESP. 

Although some of the people gathering such evidence are clumsy about clearly communicating their impressive results, the experimental evidence for extrasensory perception is very strong and very well-replicated.  Using the Ganzfeld technique, ESP researchers have achieved a high level of experimental replication. But the Ganzfeld results are by no means the best evidence for ESP.  The best evidence consists of (1) earlier tests reported by people such as Rhine and Riess, in which some subjects reported results we would never expect any subject to get by chance even if every person in the world were tested for ESP every week (see here, here and here for examples); and (2) two hundred years of observational reports of clairvoyance, in which some subjects were vastly more successful than any person ever should have been by chance or coincidence (see here, here, here, here, here, here, here and here for examples). 

No one has any neural explanation for how a brain could produce psi effects such as ESP. Evidence for ESP is fatal to the claim that the human mind is merely a product of the brain.  This is why people who maintain that claim have again and again so stubbornly refused to admit the existence of ESP. They almost always take a "head in the sand" approach, simply refusing to examine the evidence on this topic.  Such mindless non-scholarship is a very strong "red flag" suggesting their beliefs about the brain and mind are fundamentally wrong.  Two of the biggest "red flags" suggesting that someone's beliefs are dogma rather than scientifically warranted are (1) a refusal to seriously study a very large body of important observational reports relevant to such beliefs; and (2) a tendency to make untrue statements about factual matters related to one's belief claims.  Very many professors following the "brains make minds" dogma display both of these "red flags."  

Postscript: The 1961 book Challenge of Psychical Research by Gardner Murphy discusses some of the experimental evidence for ESP.  Beginning on page 57, the author discusses a series of experiments involving a student named Van Dam. The student was blindfolded and put in a sealed cubicle in one room. In another room, someone chose by lot one of the squares in a grid.

The blindfolded Van Dam was asked to guess which square had been chosen in the other room. There were 187 trials done on 7 different days. The expected result by chance was only about 4 successes. The actual number of successes was 60, a success rate of about 32%.  You would not expect a result half as good to ever occur by chance if every person in the world were to be tested. 

The pages preceding page 75 discuss the Pearce-Pratt ESP experiment involving two people in different buildings. We read on page 75 that there were 558 successes in 1850 trials, a success rate of 30%, in a situation where the expected chance result was only 20%, or 370 successes. The probability of getting such a result by chance was calculated at less than 1 in 10 to the twenty-second power, less than 1 in ten billion trillion. 
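The same sort of rough binomial check sketched earlier (again my own illustration, assuming independent trials) can be applied to these two results. For the Van Dam trials I infer the per-trial chance probability from the book's statement that about 4 successes were expected in 187 trials; for Pearce-Pratt the 20% chance rate is given.

```python
# Rough binomial checks of the two results above (assuming independent trials).
from scipy.stats import binom

# Van Dam: 60 successes in 187 trials; chance probability inferred from the
# stated expectation of about 4 chance successes in 187 trials.
print(binom.sf(60 - 1, 187, 4 / 187))

# Pearce-Pratt: 558 successes in 1850 trials at a 20% chance rate; the book
# reports the chance probability as below 1 in 10 to the twenty-second power.
print(binom.sf(558 - 1, 1850, 0.20))
```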

Saturday, July 10, 2021

Most Scientists Don't Follow Formal Evidence Standards, Unlike Judges

The www.realclearscience.com site is a typical "science news" site: a strange mixture of hard fact, speculations, often-dubious opinions, spin, clickbait, hype and corporate propaganda, all under the banner of "science."  I noticed an enormous contrast between one of the site's articles that appeared yesterday and another article that appeared today.

The link that appeared yesterday led to a very give-you-the-wrong-idea article by scientist Adam Frank, one with the swaggering title, "The most important boring idea in the universe."  This idea that Frank says is so important is the claim that "scientific knowledge" rests upon "mutually agreed standards of evidence." 

Frank attempts to persuade us that after arguing for a long time, scientists agreed on "standards of evidence" that they are now faithfully following. He writes the following:

"There were lots of wrong turns in figuring out what counted as meaningful evidence and what was just another way of getting fooled. But over time, people figured out that there were standards for how to set up an experiment, how to collect data from it, and how to interpret that data. These standards now include things like isolating the experimental apparatus from spurious environmental effects, understanding how data collection devices respond to inputs, and accounting for systematic errors in analyzing the data. There are, of course, many more."

The idea that Frank tries to plant is a false one. Scientists never agreed upon some "standard of evidence" to be used in judging how experiments or observations should be done, or whether scientific papers should be published or publicized.  There is no formal written "standard of evidence" used by scientists. By contrast, courts actually do make use of formal written standards of evidence. 

When you go to www.rulesofevidence.org, you will find the Federal Rules of Evidence used in US federal courts.  The page here lists about 68 numbered rules of evidence. Here are some examples:

  • Rule 404: "Evidence of a person’s character or character trait is not admissible to prove that on a particular occasion the person acted in accordance with the character or trait."  (There are quite a few exceptions listed.) 
  • Rule 608: " A witness’s credibility may be attacked or supported by testimony about the witness’s reputation for having a character for truthfulness or untruthfulness, or by testimony in the form of an opinion about that character. But evidence of truthful character is admissible only after the witness’s character for truthfulness has been attacked."
  • Rule 610: "Evidence of a witness’s religious beliefs or opinions is not admissible to attack or support the witness’s credibility." 

There are more than 60 other rules in the Federal Rules of Evidence. US Federal Courts have a formal written set of evidence standards. But scientists have no such thing.  The impression that Frank has attempted to insinuate (that scientists operate under formal standards of evidence that they carefully worked out after long debate) is not correct.

There are no formal detailed written evidence standards in any of the main branches of science.  In biology, poorly designed experiments following bad practices are extremely common.  In theoretical biology and physics, it is extremely common for scientists to publish papers based on the flimsiest or wildest of speculations. When we read scientific papers such as those speculating about a multiverse consisting of many unobserved universes, we are obviously reading papers written by authors following no standards of evidence at all. It's pretty much the same for any of the thousands of papers that have been written about never-actually-observed things such as abiogenesis, dark matter, dark energy or primordial cosmic inflation.

In fields such as paleontology, elaborate speculation papers can be based on the flimsiest piece of ancient matter or the tiniest bone fragment; and many papers in that field are not based on specific fossils.  Then there are endless chemistry papers not based on actual physical experiments but on "chemical reactions" merely occurring on paper, a blackboard, or inside a computer program. Countless papers in many fields are based on mere computer simulations or abstruse speculative math rather than physical experiments or observations. 

On the next day after the www.realclearscience.com site published a link to Frank's article, it published a link to an article that very much contradicted his insinuations that scientists are adhering to sound standards of evidence. The link was to an article on www.reason.com entitled "How Much Scientific Research Is Actually Fraudulent?"

Here are some quotes from the article:

"Fraud may be rampant in biomedical research. My 2016 article 'Broken Science' pointed to a variety of factors as explanations for why the results of a huge proportion of scientific studies were apparently generating false-positive results that could not be replicated by other researchers. A false positive in scientific research occurs when there is statistically significant evidence for something that isn't real (e.g., a drug cures an illness when it actually does not). The factors considered included issues like publication bias, and statistical chicanery associated with p-hacking, HARKing, and underpowered studies....A 2015 editorial in The Lancet observed that 'much of the scientific literature, perhaps half, may simply be untrue.' A 2015 British Academy of Medical Sciences report suggested that the false discovery rate in some areas of biomedicine could be as high as 69 percent. In an email exchange with me, Ioannidis estimated that the nonreplication rates in biomedical observational and preclinical studies could be as high as 90 percent....Summarizing their results, an article in Science notes, 'More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature. And one in 12 [8 percent] admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results.' Daniele Fanelli, a research ethicist at the London School of Economics, tells Science that 51 percent of researchers admitting to questionable research practices 'could still be an underestimate.' "

Such comments are consistent with my own frequent examination of neuroscience research papers. When examining such papers, I seem to find that Questionable Research Practices were used most of the time. Almost always, the papers use study group sizes smaller than the reasonable standard of having at least 15 subjects in every study group, meaning there is a high chance of a false alarm. Most of the time, the papers fail to show evidence that any blinding protocol was used. The detailed elucidation and following of a rigorous blinding protocol is essential for almost any experimental neuroscience study to be regarded as reliable. Few papers follow the standard of pre-registering a hypothesis and methods for data gathering and analysis, leaving the researchers free to follow an approach rather like "torture the data until it confesses" to what the researcher is hoping to find. 

[Image: "torture the data until it confesses"]
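To illustrate the "small study groups mean a high chance of a false alarm" point, here is a rough simulation of my own; the 15-subject threshold comes from the paragraph above, but the assumed "medium" effect size and the assumed 10% share of true hypotheses are illustrative assumptions, not figures from any of the articles discussed. It estimates the statistical power of a simple two-group comparison at various group sizes, and then the share of "significant" results that would be false alarms.

```python
# A rough simulation: small per-group sample sizes give low power, and low
# power plus a low base rate of true hypotheses means that many results
# reaching p < 0.05 are false alarms.  All numbers here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimate_power(n_per_group, effect_size, n_sims=10_000, alpha=0.05):
    """Fraction of simulated two-group experiments that reach p < alpha."""
    significant = 0
    for _ in range(n_sims):
        group_a = rng.normal(0.0, 1.0, n_per_group)
        group_b = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            significant += 1
    return significant / n_sims

prior_true = 0.10  # assume only 10% of tested hypotheses are actually true
for n in (8, 15, 50):
    power = estimate_power(n, effect_size=0.5)  # a "medium" true effect
    # Share of significant results that are false positives at this power:
    false_share = (0.05 * (1 - prior_true)) / (0.05 * (1 - prior_true) + power * prior_true)
    print(f"n={n} per group: power is about {power:.2f}, "
          f"share of significant results that are false alarms is about {false_share:.2f}")
```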

What this means is that the great majority of times you read about some neuroscience research on some science news site, you are reading about an unreliable result that should not be taken as robust evidence of anything. 

[Image: bad neuroscience practices]


Frank mentioned "best practices," trying to insinuate that scientists follow such practices. He fails to tell us about the large fraction of scientists that follow shoddy practices.  Frank attempted to portray scientists as "follow strictly the good rules" guys acting like judges in a court. But it seems that a large fraction of scientists are like cowboys in the Wild West pretty much doing whatever they fancy.  And so many of the gun blasts from such cowboys are just noise. 

Sunday, July 4, 2021

When You Read "It Is Widely Believed," Suspect a Dubious Belief Custom

We can classify several different types of scientific truth claims, along with some tips on how to recognize the different types. 

  • Citation of established fact. How to recognize it: typically occurs with a discussion of the observational facts that proved the claim.
  • Citation of a claim that is not yet established fact. How to recognize it: typically occurs with phrases such as “scientists believe” or “it is generally believed” or an appeal to a “scientific consensus.” The claim of a “scientific consensus” is often unfounded, and there may be many scientists who do not accept the claim.
  • Citation of a claim that has little basis in observations, and that there may be good reasons for doubting. How to recognize it: often occurs with a phrase such as “it is widely believed,” or maybe a more confident-sounding phrase like “it is becoming increasingly clear” or “there is growing evidence.”


Claims that memories are stored in synapses fall into the third of these categories. To show that, I may cite some of the many times in which writers or scientists suggested that memories are stored in synapses, and merely used the weak phrase "it is widely believed" as their authority. 

  • "It is widely believed that synaptic plasticity mediates learning and memory"  (link)
  • "It is widely believed that synapses in the forebrain undergo structural and functional changes, a phenomenon called synaptic plasticity, that underlies learning and memory processes" (link).
  • "It is widely believed that synaptic modifications underlie learning and memory" (link).
  • "As with other forms of synaptic plasticity, it is widely believed that it [spike-dependent synaptic plasticity] underlies learning and information storage in the brain" (link).
  • "It is widely believed that memories are stored as changes in the number and strength of the connections between brain neurons, called synapses" (link).
  • "It is widely believed that modifications to synaptic connections – synaptic plasticity – represent a fundamental mechanism for altering network function, giving rise to phenomena collectively referred to as learning and memory" (link).
  • "It is widely believed that encoding and storing memories in the brain requires changes in the number, structure, or function of synapses"  (link).
  • "It is widely believed that long-term changes in the strength of synaptic transmission underlie the formation of memories" (link).
  • "It is widely believed that the brain's microcircuitry undergoes structural changes when a new behavior is learned" (link).
  • "It is widely believed that long-lasting changes in synaptic function provide the cellular basis for learning and memory in both vertebrates and invertebrates (link).
  • "It is widely believed that the brain stores memories as distributed changes in the strength of connections ('synaptic transmission') between neurons" (link).
  • "It is widely believed that the long-lasting, activity-dependent changes in synaptic strength, including long-term potentiation and long-term depression, could be the molecular and cellular basis of experience-dependent plasticities, such as learning and memory" (link).
  • "It is widely believed that a long-lasting change in synaptic function is the cellular basis of learning and memory" (link).
  • "It is widely believed that the modification of these synaptic connections is what constitutes the physiological basis of learning" (link).
  • "It is widely believed that memory traces can be stored through synaptic conductance modification" (link).
  • "It is widely believed that memories are stored in the synaptic strengths and patterns between neurons" (link).
  • "It is widely believed that long-term changes in the strength of synaptic connections underlie learning and memory" (link).
  • "It is widely believed that long-term synaptic plasticity plays a critical role in the learning, memory and development of the nervous system" (link).
  • "It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain" (link).
  • "It is widely believed that long-term memories are stored as changes in the strengths of synaptic connections in the brain" (link). 
  • "It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory" (link).
  • "It is widely believed that synaptic modifications are one of the factors underlying learning and memory" (link).
  • "Learning, it is widely believed, is based on changes in the connections between nerve cells" (link).
  • "It is widely believed that memories are stored as changes in the number and strength of the connections between brain cells (neurons)" (link).
  • "It is widely believed that memories are stored as changes in the strength of synaptic connections between neurons" (link). 
  • "It is widely believed that memory formation is based on changes in synapses" (link).

There is no good evidence that any memories are stored in synapses, or stored through a strengthening of synapses, or stored by a modification of synapse weights, or stored anywhere in the human brain through any means. No one has any understanding or any credible coherent theory of how learned information or episodic memories could ever be stored using synapses or any other part of the brain. The strongest reason for rejecting all of the claims in the bullet list above is that the average lifetime of the proteins in synapses is only about two weeks or less.  The proteins in synapses last an average of only about a thousandth of the longest length of time that humans can remember things (50 years or more). Moreover, humans can form permanent new memories instantly, which could never occur if forming such memories required synapse strengthening (something that would take minutes or hours, because it would require the synthesis of new proteins). 
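As a quick arithmetic check of the "about a thousandth" figure, using the roughly two-week protein lifetime and the 50-year memory span mentioned above:

```python
# Ratio of a 50-year memory span to a roughly 2-week synaptic protein lifetime.
protein_lifetime_weeks = 2
memory_span_weeks = 50 * 52   # 50 years is roughly 2600 weeks
print(memory_span_weeks / protein_lifetime_weeks)  # 1300.0, on the order of a thousand
```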

The examples in the bullet list above are simply an example of a speech custom. Scientists and science writers have gotten into the bad habit of saying something like "it is widely believed that memory formation occurs through changes in synapses." The fact that such a large fraction of the writers repeating this myth use the same phrasing (including the phrase "it is widely believed") shows that what is going on is mainly people parroting what other people have said, rather than independently reaching intelligent judgments based on facts.  Note that in not a single one of these cases has any of these writers even claimed a scientific agreement, or even a majority of scientist opinion.  Claiming that something is "widely believed" is to make a claim much weaker than claiming "almost everyone believes" or "most people believe." When people haven't got much of a case, they use phrases like "it is widely believed." 

In general, when you hear or read someone using the phrase "it is widely believed," you should suspect a dubious belief custom or a misguided belief.  For example, if someone says "it is widely believed you can't trust men from that country," he is saying something that means very little. And if someone says, "it is widely believed that the thirteenth day of the month is unlucky," you are probably just hearing an old wives' tale.  Because they all use the weak, shaky phrase "it is widely believed," every statement in my bullet list above should be treated as a "red flag" indicating a lack of good evidence.