Friday, September 19, 2025

Parapsychologists Should Not Imitate Bad Research Habits of Neuroscientists

In this post I will criticize an experiment attempting to show the existence of a mind-over-matter effect. My criticism does not arise from any disbelief in the existence of psi phenomena such as telepathy and ESP. I have written many posts presenting the evidence for ESP and clairvoyance, which you can read in the series of posts here and here, continuing to press Older Posts at the bottom right. Later in this post I point to some of the best evidence for paranormal psi effects. I regard the evidence for ESP (telepathy) and clairvoyance as being very good.

The experiment I refer to is described in the paper "Enhanced mind-matter interactions following rTMS induced frontal lobe inhibition" which you can read here. The authors start out by giving some speculative reasons for thinking that the brain may suppress or inhibit paranormal powers of humans. My concern that the authors have gone astray begins when I read the passage below:

"Based on our findings in the two individuals with damage to their frontal lobes, we adopted a new approach to help determine whether the left medial middle frontal region of brain acts [as] a filter to inhibit psi. This was the use of repetitive transcranial magnetic stimulation (rTMS) to induce reversible brain lesions in the left medial middle frontal region in healthy participants."

Reversible brain lesions? That sounds like dangerous fiddling that might go wrong and cause permanent damage to the brain. On this blog I have strongly criticized neuroscience experiments that may endanger human subjects, in the series of posts you can read here and here. I must be consistent, and criticize just as strongly parapsychologists using similar techniques. There are very many entirely safe ways to test whether humans have paranormal psychical abilities, and I describe one of them later in this post. So I don't understand why anyone would feel justified in zapping someone's brain in an attempt to provide evidence for such abilities. There's no need for such high-tech gimmickry. Abilities such as mind-over-matter and telepathy can be tested in simple ways that have no reliance on technology. I would strongly criticize any conventional neuroscience experiment that claimed to be inducing "reversible brain lesions." I must just as strongly criticize this experiment for doing such fiddling.

The paper describing the experiment tells us about an experimental setup in which people were asked to change the output of a random number generator producing 0s and 1s. Subjects looked at a computer screen showing an arrow, and willed the arrow to move in a particular direction. The idea was that if the generator produced many more 0s than 1s, an arrow starting at the middle of the screen would drift toward one end of the screen; and if it produced many more 1s than 0s, the arrow would drift toward the other end.
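To make the setup concrete, here is a toy sketch in Python of the general mechanic (my own illustration, not the authors' actual software): a stream of random bits moves an arrow left or right according to the running surplus of 1s over 0s.

```python
import random

def run_trial(num_bits=1000, seed=None):
    """Toy model of the RNG-and-arrow mechanic: each 1 nudges the arrow
    one step toward one end of the screen, each 0 one step toward the
    other. The final position reflects the surplus of 1s over 0s."""
    rng = random.Random(seed)
    position = 0
    for _ in range(num_bits):
        position += 1 if rng.random() < 0.5 else -1
    return position

# By chance alone the arrow's displacement is just binomial noise, on
# the order of sqrt(num_bits); the experiment asks whether intention
# can shift it systematically in the willed direction.
print(run_trial(seed=42))
```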

We then get some paragraphs describing how the data was analyzed, and the results. Here the paper fails to follow a golden rule for parapsychology experiments. That rule is: keep things as simple as possible, so that readers will readily understand any impressive results achieved, and will have a minimal tendency to suspect that some statistical sleight-of-hand is occurring. The importance of this rule cannot be over-emphasized. Skeptics and materialist scientists start out being skeptical of claims of paranormal abilities. The more complex your experimental setup and data analysis, the easier it will be for them to ignore or belittle your results.

There is a golden rule of  computer programming, the rule of KISS, which stands for Keep It Simple, Stupid.  People doing parapsychology experiments should follow the same rule. The more complex the experimental setup, the easier it will be for skeptics to suggest that some kind of "smoke and mirrors" was going on. 

The authors of the paper "Enhanced mind-matter interactions following rTMS induced frontal lobe inhibition" have violated this rule. They give us some paragraphs of statistical gobbledygook about their results, and fail to communicate an impressive result in any way that even 10% of their readers will be able to understand. 

The authors state this:

"As predicted by our a priori hypothesis, we demonstrated that healthy participants with reversible rTMS induced lesions targeting the left medial middle frontal brain region showed larger right intention effects on a mind-matter interaction task compared to healthy participants without rTMS induced lesions. This significant effect was found only after we applied a post hoc weighting procedure aligned with our overarching hypothesis."

It sounds like the raw results of their experiment failed to show a significant effect, but that by "calling an audible" after gathering data, introducing some unplanned statistical trick to try to "save the day," the authors were able to claim a significant result. This sounds like the kind of sleazy maneuver that neuroscientists so often use to try to gin up a result showing "statistical significance." Instead of acting like badly behaving neuroscientists, our authors should have used pre-registration (or, better yet, a registered report). The authors should have published a very exact plan for how data would be gathered and analyzed, before gathering any data. They then should have stuck to such a plan. If this resulted in a non-significant effect or null result, they should have reported it as a non-significant effect or null result.

Parapsychologists should not be aping the bad habits of neuroscientists, whether it be zapping brains in a potentially dangerous way, or following "keep torturing the data until it confesses" tactics.  Parapsychologists should be following experimental best practices. 

The experimental evidence for telepathy (also called extra-sensory perception or ESP) is very good. We have almost two hundred years of compelling evidence for the phenomenon of clairvoyance, a type of extrasensory perception in which a person correctly describes something he cannot see and does not know about. It is not correct that serious study of this topic began about 1882 with the founding of the Society for Psychical Research in England, as often claimed. Serious rigorous investigation of clairvoyance dates back as far as 1825, with the 1825-1831 report of the Royal Academy of Medicine finding in favor of clairvoyance. Serious scholarly investigation of clairvoyance occurred many times between 1825 and 1882. Such investigations often involved hypnotized subjects, with many investigators reporting clairvoyance from subjects who were hypnotized or in a trance. Experimental investigation of telepathy occurred abundantly in the twentieth century, with many university investigators such as Joseph Rhine of Duke University reporting spectacular successes.

You can read up about some of the evidence for such things by reading some of my posts and free online books below:

There is a simple way for you to test this subject yourself, by doing quick tests with your friends and family members. I will now describe a way of doing such tests that I have found to be highly successful, as I report in another post here. I have no idea whether you will get similar success yourself, but I would not be surprised if you do. Below are some suggestions:

(1) Test ideally using family members or close friends.  I don't actually have any data showing that tests of this type are more likely to be successful using family members or close friends, but I can simply report that I have had much success testing with family members.

(2) Ask the family member or friend to guess some unusual thing that you have seen with your eyes or seen in a dream.  Announce this simply as a guessing game, rather than some ESP or telepathy test. For example, you might say something like, "I saw something unusual today -- I'll give you four guesses to guess what it was."  Or you might say, "I dreamed of something I rarely dream of -- I'll give you four guesses to guess what it was." 

(3) Do not give any clues about your guess target, or give only a very weak clue. Your ESP test will be trying to find some case of a guess matching a guess target, with such a thing being extremely improbable.  You will undermine such an attempt if you give any good clue, such as "I'm thinking of an animal" or "I'm thinking of something in our house." If you give a clue, give only a very weak one such as "I saw something unusual on my walk today, can you guess what it was," or "I had a dream about something I rarely dream of, can you guess what it was." 

(4) Be sure to suggest that the person try three or four guesses rather than a single guess. I have noticed a strong "warm up" effect when occasionally trying tests like this. I have noticed that the first guess someone makes usually fails, but that the second or third guess is often correct or close to the answer. For example, not long ago I said to one of my daughters, "You'll never guess what I saw down the street." I gave no clues, but asked her to guess. After a wrong guess of an orange cat, her second guess was "a raccoon," which is just what I saw. No one in our family had seen such a thing on our street before. Later in the day I asked her what I saw in a weird dream I recently had, mentioning only that it involved something odd in our front yard. After a wrong first guess of a snowman, she asked, "Was it a wild animal?" I said yes. Then she asked, "Was it an elephant?" I said yes.

(5) After the person makes the first guess, suggest that the person take 10 seconds before making each of the next guesses. Throughout the entire guessing session, you should be trying hard to visualize the thing you are asking the person to guess. Slowing the process down by suggesting 10 seconds between guesses may increase the chances of your thought traveling to the subject you are testing. 

(6) Only test using a guess target that is some simple thing that you can clearly visualize in your mind. Do not test using a guess target of some complicated scene involving multiple actions or interacting objects. For example, don't ask someone to guess some scene you saw that involved someone dropping his coffee and spilling it on his feet. Use a guess target of some single object or a single animal or a single human. Testing with types of animals seems to work well. If the test object or animal has a strong color or some characteristic action, all the better. Do not test using an extremely common sight such as your family dog; success with such a test will not be very impressive. It's better to use a rarer sight, maybe something you see as rarely as a donkey or a raccoon.

(7) Answer only yes or no questions, counting each question as one of the three or four allowed guesses.  You can include a single "You're getting warm" answer instead of a "no" answer, but no more than one.  

(8) Very soon after the test, write down the results, recording all guesses and questions, and any responses you made such as "yes" or "no." With testing like this, the last thing you want to rely on is a memory of some event that happened weeks ago. Write down the results of your test, positive or negative, within a few minutes (or at most an hour) of making the test. A minimal record-keeping sketch appears after this list.

(9) Do a single test (allowing three or four guesses) only about once every week or two weeks. There may be a significant fatigue factor in such tests. A person who does well on such a test may not continue to do well if you keep testing him on the same day. To avoid such fatigue and to avoid annoying people with too many tests, it is good to just suggest a casual test as described above, once every week or two weeks. Keep a long-term record of all tests of this type you have done, recording failures as well as successes. 

(10) It's best not to announce the test as an ESP test or as a telepathy test, but to describe it as a quick guessing game or a test of chance. Our materialist professors have succeeded in creating a great deal of unreasonable prejudice and bias against psychic realities that are well-established. So the mere act of announcing an ESP test may cause your subject to raise mental barriers that may prevent any successful result. To avoid that, it is best to describe your test as a quick guessing game or a test of chance.

(11) It's best to choose a guess target that you personally saw, either in reality or in a dream. The more personal connection you have with the guess target, the better. Something that you personally saw recently (either in reality or a dream) may work better than something you merely chose randomly. The more recent your sensory experience of the guess target, the better. Choosing a guess target of something you both saw and touched may work better than something you merely saw. The more you have thought about the guess target, the better. It's better that the object have one or two colors than many colors, and the brighter the color, the better.

(12) Be cautious in publicly reporting successful results. I would wait until you get three or four good successful tests before reporting anything about such tests on anything like social media. Also, avoid reporting your results as evidence of anything, unless you have something very impressive to report. Social media has a horde of skeptics ready to attack you if you claim evidence for ESP based on slim results. A good rejoinder to such attacks is if you can say, "I'm not claiming anything, I'm just reporting what happened."
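As promised in suggestion (8), here is a minimal sketch of one way to log each trial to a CSV file. The file name, field names, and the log_trial helper are my own invention, not any standard; the point is just to record every trial, hit or miss, as soon as it happens.

```python
import csv
import os
from datetime import date

# Hypothetical field names -- adapt freely.
FIELDS = ["date", "target", "guesses", "responses", "outcome"]

def log_trial(path, target, guesses, responses, outcome):
    """Append one guessing trial to a CSV log, writing the header row
    if the file is new or empty."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(FIELDS)
        writer.writerow([date.today().isoformat(), target,
                         "; ".join(guesses), "; ".join(responses), outcome])

# Example: the raccoon trial described in suggestion (4) above.
log_trial("esp_log.csv", "raccoon",
          ["orange cat", "raccoon"], ["no", "yes"], "hit on guess 2")
```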

[Image: ESP test]

Above we see some guess targets that were successfully guessed after only a few guesses, in trials in which the guesser was not told that the item was an animal or anything living. There were about nine trials in all; one or two of them were unsuccessful, and one was a partial success. The guess targets were only in my mind, and I compiled the visual above only after these items were guessed correctly.

Tests that you do of this type will be unlikely to ever constitute any substantial contribution to the literature of parapsychology, unless you follow a very formal approach with an eye towards making such a contribution. But such tests may have the effect of helping you to realize or suspect extremely important truths about yourself and other human beings that you might never have realized. A person might read a dozen times about experiments suggesting something, but the truth of that thing may never sink in until that person has some first-hand experience with the thing.  

Whether ESP or telepathy can occur is something of very high philosophical importance. There is a reason why materialists show a very dogmatic refusal to seriously study the evidence for telepathy. It is that if telepathy can occur, the core assumptions of materialism must be false. Telepathy could never occur between brains, but might be possible between souls. So any personal evidence you may get of the reality of telepathy can be a very important step in your philosophical journey towards better understanding what humans are, and what kind of universe we live in. 

Using a binomial probability calculator it is possible to very roughly estimate the probability of getting success in a series of about nine tests like the one above. To use such a calculator, you need an estimate of the chance of success per trial. With tests like those I have suggested, it is hard to estimate such a thing exactly, because you are choosing a guess target that could be any of 100,000 different things. One reasonable approach would be to assume 100,000 different guess possibilities. The chance of a successful guess within only four guesses can then be calculated as shown below, giving a result of only .00004.
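If you would rather check the arithmetic in code than in a web calculator, a couple of lines of Python reproduce that figure (the 1-in-100,000 chance per guess is, as stated, only a rough assumption):

```python
# Chance of at least one hit within four guesses, assuming each guess
# independently has a 1-in-100,000 chance of matching the target (a
# rough assumption, as discussed above).
p = 1 / 100_000
hit_within_four = 1 - (1 - p) ** 4
print(f"{hit_within_four:.6f}")   # 0.000040
```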

[Image: probability calculation]

The screen above is using the StatTrek binomial probability calculator, which doesn't seem to work whenever the probability is much less than one in a million. A similar calculator is the Wolfram Alpha binomial probability calculator, which will work with very low probabilities. I used that calculator with the data described in my post here. The situation described in that post was:

  • Each correct guess had a probability no greater than about 1 in 10,000, as I never mentioned the category of what was to be guessed, but always merely asked a relative to guess after saying  something like "I saw something today, try and guess what it was" or "I dreamed of something today, try to guess what it was."
  • Counting all questions asked (which were all "yes or no" questions) as guesses, there were, across about nine guessing trials involving nine targets, a total of about 37 guesses. 
  • Six times the guess target was correctly guessed within a few guesses, and one time the answer was wrong but close (with a final guess of a red bicycle rather than a red double-decker bus, both being red vehicles).  
Counting the close guess as a failed attempt, I entered this data into the Wolfram Alpha binomial probability calculator, getting these results (with this calculator the "number of successes" is referred to as the "endpoint"):

[Image: ESP test result]

With a probability of about .00000000000000001 (roughly 1 in 100 quadrillion), it would be very unlikely for anyone to ever get a result this successful by mere chance, even if every person on planet Earth were to try such a set of trials. You can use the same Wolfram Alpha binomial probability calculator to get a rough estimate of the likelihood of results you get.
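You can also reproduce this kind of calculation without any web calculator. Here is a minimal sketch using only the Python standard library, plugging in the rough estimates given above (about 37 guesses, a chance of about 1 in 10,000 per guess, and 6 successes):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# About 37 guesses, each with roughly a 1-in-10,000 chance of a hit,
# and 6 hits observed (counting the close bicycle/bus guess as a miss).
print(binom_tail(6, 37, 1 / 10_000))   # on the order of 1e-18
```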

I mention using a binomial probability calculator above, but just ignore such a thing if you find it confusing, because the use of such a calculator is just some optional "icing on the cake" that can be used after a successful series of tests. The point of the tests I suggest here is not to end up with some particular probability number, but mainly to end up with an impression in your mind of whether you were able to get substantive evidence that telepathy or mind reading is occurring. Such an impression may be a valuable clue that tends to point you in the right direction in developing a sound worldview. Some compelling personal experience with telepathy may save you from a lifetime of holding the widely taught but unfounded and untenable dogma that you are merely a brain or merely the result of some chemical activity in a brain.  Getting such experience, you may embark on further studies leading you in the right direction. Keep in mind that a negative test never disproves telepathy, just as failing to jump a one-meter hurdle does nothing to show that people can never jump one-meter hurdles. 

In the academic literature of ESP testing, we often read about the use of Zener cards, a deck of 25 cards in which each card bears one of five abstract symbols. While using such cards has the advantage of allowing precise estimates of probability, there is no particular reason to think that better results will be obtained when using such cards. To the contrary, it may be that impressive results are much less likely to be obtained using such cards, and that ESP tests work better when living or tangible guess targets are used, such as a living animal or a tangible object.
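To see how precise the Zener arithmetic is, a short computation gives the exact chance of, say, 10 or more hits in a 25-card run:

```python
from math import comb

# Each Zener guess has exactly a 1-in-5 chance, so chance expectations
# are precise: the expected score in a 25-card run is 5 hits, and the
# exact chance of 10 or more hits is easy to compute.
p_ten_or_more = sum(comb(25, k) * 0.2**k * 0.8**(25 - k)
                    for k in range(10, 26))
print(round(p_ten_or_more, 4))   # about 0.0173
```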

A very important point I must reiterate is that when trying tests such as I have suggested, it is crucial to allow for a second, third and fourth guess, with at least ten seconds between guesses (during which the person thinking of the guess target tries to visualize the guess target).  In my testing the correct guesses tend to come on the second, third or fourth try. 

The results mentioned above are not by any means the best result I have got in a personal ESP test. The beginning of my very interesting post "Spookiest Observations: A Deluxe Narrative" describes a much more impressive result I got long ago, in a different type of test than the type of test described above.  

Wednesday, September 17, 2025

More Candid Confessions of the Neuroscientists

In my post "Candid Confessions of the Cognition Experts," which you can read here, and another similar post, I quote some cognition experts and neuroscientists who make confessions about matters such as the sorry state of neuroscience research, and how little neuroscientists understand about how minds arise. Below are some more quotes of this type.

I'll start with some quotes mostly using the phrase "in its infancy." Whenever scientists confess that something is "in its infancy," they are effectively admitting they do not have good knowledge about such a topic.

  • "Despite recent advancements in identifying engram cells, our understanding of their regulatory and functional mechanisms remains in its infancy." -- Scientists claiming erroneously in 2024 that there have been recent advancements in identifying engram cells, but confessing there is no understanding of how they work (link).
  • "Study of the genetics of human memory is in its infancy though many genes have been investigated for their association to memory in humans and non-human animals."  -- Scientists in 2022 (link).
  • "The neurobiology of memory is still in its infancy." -- Scientist in 2020 (link). 
  • "The investigation of the neuroanatomical bases of semantic memory is in its infancy." -- 3 scientists, 2007 (link). 
  • "Currently, our knowledge pertaining to the neural construct of intelligence and memory is in its infancy." -- Scientists, 2011 (link). 
  • "But when it comes to our actual feelings, our thought, our emotions, our consciousness, we really don't have a good answer as to how the brain helps us to have those different experiences." -- Andrew Newberg, neuroscientist, Ancient AliensEpisode 16 of Season 14, 6:52 mark. 
  • "Dr Gregory Jefferis, of the Medical Research Council's Laboratory of Molecular Biology (LMB) in Cambridge told BBC News that currently we have no idea how the network of brain cells in each of our heads enables us to interact with each other and the world around us."  -- BBC news article (link). 

By making such confessions, scientists are admitting that they do not actually understand how a brain could store or retrieve memories. The reason for such ignorance (despite billions of dollars on funding to try to answer such questions) is almost certainly that the brain does not actually store memories and is not the source of the human mind.  

A similar confession is found in the recent paper here, where scientists confess "It remains unclear where and how prior knowledge is represented in the brain." The truth is that no one has ever found the slightest evidence of any such thing as prior knowledge being represented in the brain, and no one understands how learned knowledge could ever be represented in a brain. 

An interesting paper is the paper "On the omission of researchers' working conditions in the critique of science: Critique of neuroscience and the position of neuroscientists in economized academia" by Eileen Wengemuth.  Wengemuth interviewed 13 neuroscientists about critiques of neuroscientists, apparently agreeing to quote them anonymously. She got some revealing quotes. 

A neuroscientist identified only as NW12 states this: "We still don't understand how molecules contribute to consciousness or the mind." On page 85 a neuroscientist identified only as NW2 makes a confession, which has a kind of "we must publish a paper even when we know it's junk" sound to it. First Wengemuth tells us this:

"One interviewee recounts an incident in which a new colleague pointed out a flaw in an experimental setup, which limited the validity of the conclusions drawn from the experiment. However, since she needed to have a publication soon, the interviewee [NW2]  describes that it seemed not possible to change the experimental setup and to repeat the experiment."

Immediately after that description, we have a quote from NW2:

NW2:  "She had a very good point and we never thought about it in two years of doing this experiment. We have a problem. (Both laugh) And nevertheless, we have to publish, because... you know, it's two years of work! So we will discuss this, we will account for it, we will try our best, but we probably don't want to rerun the whole experiment saying 'Oh, what happens if we change this other thing.' Once we've reached our conclusions..." 

I: "You said: 'But we still have to publish.' Did you mean, for example, that you got some grants and now you have to show, ok, we did something with that money?"

NW2: "Not so much based on grant money, but in terms of career. (...) I need papers to get my next job."

Get the idea? "The show must go on" as they say in the theater business. And apparently scientific papers must be published, to advance the career goals of neuroscientists,  even after it has become clear that bad methods were used (which seems like the majority of the time in contemporary neuroscience research). Discussing the quote above, Wengemuth says, "In this interview clip, it becomes clear that the interviewee perceives her working and research conditions as not allowing her to work in a way that would meet her own standards of good science."

On page 85 a neuroscientist identified only as NW9 seems to suggest that guys like him are playing fast-and-loose in their interpretations of what their experiments show, in order to get interesting-sounding claims that may increase the chance of publication in "high-impact" journals:

NW9: "We are working in a structure in which an increasing number of people are on third-party-funded positions, which are temporary. And one important criterion that decides who stays and who doesn't, is: who has published where? So journal impact factors. And publishing high impact often means: generalizing as much as possible in the interpretations and throwing as many limitations as possible overboard. "

Wengemuth describes what is occurring in that quote: "He argues here that the broad claims for which neuroscientists have been criticized also have to be understood as a way of getting one's article published in a high impact journal and thus increasing one's chances for a next job." Get the idea? Our neuroscientists are prioritizing career advancement over accuracy of statements.  It sounds like they are playing "fake it until you make it."

[Image: bribed neuroscientist. Caption: "Yeah, right"]

A survey of Danish researchers found large fractions of them confessing to various types of shady or sleazy Questionable Research Practices. A 2024 follow-up study found a similar level of confession in other countries. The paper is entitled "Is something rotten in the state of Denmark? Cross-national evidence for widespread involvement but not systematic use of questionable research practices across all fields of research." The title is inaccurate, because the confessions do reveal a systematic use of Questionable Research Practices. Figure 2 of the paper reveals these confessions:

  • About 60% of the polled researchers confessed to citing literature without reading it. 
  • About 50% of the polled researchers confessed to reporting non-significant findings as evidence of no effect. 
  • About 50% of the polled researchers confessed to granting "honorary authorship" to authors who did not participate in the study. 
  • More than 50%  of the polled international researchers confessed to overselling results. 
  • About 50% of the polled researchers confessed to HARKing, which is when some hypothesis is dreamed up to explain results in an experiment not designed to test such a hypothesis. 
  • About 50% of the polled international researchers confessed to cherry-picking what data supports a hypothesis and what does not. 
  • About 40% of the polled international researchers confessed to data dredging or p-hacking, a practice sometimes described as "keep torturing the data until it confesses."
  • About 40% of the polled international researchers confessed to have refrained from reporting data that could weaken or contradict their findings. 
  • About 30% of the polled international researchers confessed to gathering more data after the initially gathered data failed to show a significant effect. 

[Image: questionable research practices]

[Image: smoke and mirrors neuroscience]

Sunday, September 14, 2025

She Has the Most Astounding Memory Skills But a Smaller-Than-Average Hippocampus

The explanations of neuroscientists are missing a great deal. For example, they lack:

  • Any coherent or credible theory of how a brain could convert experience into brain states or synapse states when a memory is created;
  • any coherent or credible theory of how a brain could convert brain states or synapse states into thoughts or recollections when a memory is recalled;
  • any coherent or credible theory of how a brain could instantly create new memories, something that humans routinely do; 
  • any coherent or credible theory of how a brain could instantly retrieve a memory, such as instantly getting just the right answer when someone is asked to identify some person or object or technical term or historical event;
  • any coherent or credible theory of how a brain could create an abstract idea such as the idea of a child or the idea of a dog or the idea of a nation;  
  • any coherent or credible theory of how a brain could imagine something such as some invention no one ever built yet. 
A standard approach of the writers about the brain is to use the trick that can be described as: when you don't have a how, try using a where. For the typical writer about brains, this involves making claims of localization of brain activity.  For example:

  • When they don't have a "how" in regard to the brain and abstract thought, they try using a "where" by claiming that abstract thought comes from the frontal cortex. This claim is very dubious because of evidence I discuss in my post "Reasons for Doubting Thought Comes from the Frontal Lobes or Prefrontal Cortex" you can read here.
  • When they don't have a "how" in regard to the brain and memory, they try using a "where" by claiming that the hippocampus drives the wonders of memory. This claim is very dubious because of evidence I discuss in my post here. A key element of false claims about the hippocampus and memory is the claim that patient HM could not form new memories after his hippocampus was surgically ravaged. The claim that this patient could not form new memories after  such damage is untrue, as I show in my post here
People have been so conditioned to think that the hippocampus is crucial for memory that you can predict exactly what a typical reader of brain-related articles might say after reading this quote from a paper on the astonishing case of subject RS: "RS’s recall of past events appears profoundly rich in episodic detail, including associated tastes, smells, emotions, interactions with others, as well as more trivial details that others would likely forget (e.g., weather conditions). As mentioned above RS... apparently remembers the entire text from all seven Harry Potter novels." Hearing of  such amazing memory abilities far beyond the memory performance of almost everyone, a typical person might say something like: "Wow, she must have a really big hippocampus!"

But the astonishing subject RS actually has a hippocampus well below average in size. 

The case of subject RS is described in the paper "Enhanced semantic memory in a case of highly superior autobiographical memory," which you can read here. The title refers to Highly Superior Autobiographical Memory or HSAM, a very rare ability of some people to recall events in their lives with extraordinary accuracy and detail. Such subjects are discussed in my post here:

The Rare "Total Recall" Effect That Conflicts with Brain Dogmas

The paper describes the young subject RS in this way: 

"RS has vivid autobiographical memories that apparently arise automatically. At the same time, she describes her memories as being organised sequentially, so that to retrieve an event she mentally 'scans'  an ordered structure starting from the earliest memory and continuing to more recent memories until the correct entry is found, akin to scanning a mental timeline (Price & Mattingley, 2013)."

This is a striking description that seems to describe memory working differently from the way memory works for the average person. The paper says this about subject RS:

"The first source of her superior memory involved being able to name days of the week for any given calendar date since the year 2000 (e.g., 'What day was it on 2 March 2002?' Answer: Saturday). The second involved her seeming ability to remember the entire text, practically word for word, of the seven books of the Harry Potter series by J.K. Rowling."

Those who have studied the most impressive examples of human memory performance will recognize the calendar ability described, which is typically called "calendar calculating" (although it can occur so quickly that it seems beyond any explanation of calculation, as confessed in the scientific article here). It has long been known that some humans have an extraordinary ability to accurately name the day of the week given a very old random date from years ago. Such people may be called "human calendars" or "calendar calculators." One of the most famous examples is Kim Peek, the autistic savant who inspired the Tom Cruise movie Rain Man. At the 2:31 mark of the extremely interesting video here, we see an example of Kim Peek's calendar ability. At that point he is asked by the adult Daniel Tammet what day of the week was Daniel's birthday of January 31, 1979. Within five seconds Kim correctly answers that the date was a Wednesday. The same ability was documented as early as the 19th century. My post here on the case of Daniel McCartney quotes an interview in which a reporter repeatedly asked Daniel what day of the week corresponded to some random date the reporter selected, with the dates spanning many decades. Daniel would give the correct day of the week very quickly when asked such questions, just as if no calculation was involved. 

To test the calendar calculation abilities of subject RS, the paper authors devised a test in which a computer screen would ask a question like this 60 times (with the dates varying in each of the 60 trials): "Which date is earlier in the week: 25 September 2005 or 2 March 2005?" Answering a question like this at levels above chance requires two different successes of the mysterious "calendar calculating" ability: one yielding the day of the week for the first date, and another yielding the day of the week for the second date. The performance of subject RS on this task was compared to that of 10 controls, ordinary people with no special memory ability. 
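None of this tells us how the savants do it, but the task itself is easy to state in code. A few standard-library lines verify the Kim Peek example above, and illustrate the kind of comparison RS was asked to make (taking Monday as the start of the week, which is Python's convention; the paper may define "earlier in the week" differently):

```python
from datetime import date

# The Kim Peek example from above: 31 January 1979.
print(date(1979, 1, 31).strftime("%A"))   # Wednesday

# The RS task: which of two dates falls earlier in the week?
# Python's date.weekday() numbers Monday as 0 through Sunday as 6.
d1, d2 = date(2005, 9, 25), date(2005, 3, 2)
earlier = min(d1, d2, key=lambda d: d.weekday())
print(earlier.isoformat(), earlier.strftime("%A"))
```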

This was the result of the experiment, verifying that subject RS really did have the ability to tell the day of the week corresponding to randomly selected dates:

"Overall mean accuracy for determining the earlier date within a week was 90% for RS (SD = 0.30) and 50% for controls (ranging from 43% to 55%; SD = 0.50). Overall median reaction time was 17.12 s for RS (SD = 7.01) and 1.24 s for controls (ranging from 0.51 to 18.85 s; SD = 10.64). Controls responded quickly because they were only able to guess on this task, unlike RS who deliberated carefully and was correct on 9 out of 10 trials."

So in these trials subject RS answered 90% correctly when asked questions such as "Which date is earlier in the week: 25 September 2005 or 2 March 2005?" -- a type of question that requires "day of the week" remembering or "day of the week" calculation for two random dates. The control subjects scored only at the chance level of 50%.

To test how well subject RS could recall text from the Harry Potter series of books, scientists had RS and some control subjects attempt to answer 30 questions such as the one shown below, which quotes from a particular book in the series. 

[Image: sample test question quoting a Harry Potter book]
The control subjects presumably all had knowledge of the Harry Potter series of books much better than the average person, because they are described in the paper as "aficionados of the Harry Potter series." The results were that subject RS got 97% of the 30 questions correct (29 out of 30), while the control subjects got 71% correct (better than the by-chance expected value of 50%, because the control subjects were aficionados of the series). The almost-perfect result of subject RS in this experiment tends to back up the authors' claim that she had a "seeming ability to remember the entire text, practically word for word, of the seven books of the Harry Potter series by J.K. Rowling."

So we have in this subject RS several types of astounding memory ability. The same subject had an MRI scan of her brain. Did it reveal some anatomical anomaly that can explain the supernormal memory ability? Not at all.

Referring to regions of interest (ROIs), which in this case means parts of the brain, the paper says, "High-resolution structural MRI scans revealed no volumetric or cortical thickness differences between RS and controls within any of the expected ROIs (i.e., hippocampus, amygdala, insula, temporal gyri and pole, subiculum, putamen)." In other words, she did not have some unusual brain anatomy that can explain her memory skills. 

In fact, the measured volume of the hippocampus of subject RS was below average. According to Table 1 of the paper about her, her "whole hippocampus" volume (the sum of the volumes of the hippocampus on the left side of her brain and the hippocampus on the right side) was 4.731 cubic centimeters. For comparison, the scientific paper "Hippocampal Volume and Shape Analysis in an Older Adult Population" measured the size of the hippocampus in 40 elderly adults. Using "cc" to mean "cubic centimeters," that paper tells us: "Total ICV-adjusted volumes were 3.48 cc [cubic centimeters] (±0.43) for the left hippocampus and 3.68 (±0.42) for the right hippocampus." That gives an average total hippocampus volume of 7.16 cubic centimeters for these 40 people. So the hippocampus of subject RS, at 4.731 cubic centimeters, was only about two-thirds of that average. The same paper on the 40 people tells us this: "There were no significant correlations between ICV-adjusted hippocampal volumes and age or memory performance (p>.05)."

So it seems we should not be surprised at all that this woman with the miracle memory had a hippocampus of below-average size, simply because there is no basis for any claim that hippocampus size has any major relation to memory performance, contrary to the misleading insinuations of people writing about the hippocampus.  

The term "hyperthymesia" is another term used for HSAM or Highly Superior Autobiographical Memory. A recent paper is entitled "Autobiographical hypermnesia as a particular form of mental time travel." In the abstract we read this:

"Here, we describe a case of hyperthymesia with an objective as well as a subjective assessment of mental time travel abilities in different temporal distances. This is the first observation of hyperthymesia with a full evaluation of mental time travel capacities in different temporal distances, encompassing the individual capacity to retrieve personal events from the personal past as well as to foresee personal events in the future."

Unfortunately the paper is behind a paywall. But an article on the story summarizes some of its details:

"The study focused on a 17-year-old girl, referred to as TL, who organizes her memories with unusual precision. She separates her memories into two types: ‘black memory,’ which is factual information learned in school without emotional significance, and her autobiographical memories, which she stores in a detailed mental framework.

TL describes her autobiographical memories as being stored in a ‘white room,’ where binders are organized by theme and date. In this mental space, she can review episodes from family life, vacations, friendships, or childhood experiences. Some memories are recalled as images or text messages."

Friday, September 12, 2025

Criticism of Overconfidence, Dubious Theorizing and Poor Performance Is Not Conspiracy Theorizing

The Wall Street Journal recently published a poorly-written article by Dan Kagan-Kans entitled "The Rise of 'Conspiracy Physics.'" Columbia University mathematician Peter Woit describes the article like this:

"The article is an excellent example of the sort of epistemic collapse we’re now living in. There’s zero intelligent content about the underlying scientific issues (is fundamental theoretical physics in trouble?), just a random collection of material about podcasts, written by someone who clearly knows nothing about the topic he’s writing about. The epistemic collapse is total when traditional high-quality information sources like the Wall Street Journal are turned over to uninformed writers getting their information from Joe Rogan podcasts. Any hope of figuring out what is true and what is false is now completely gone.

I was planning on writing something explaining what exactly the WSJ story gets wrong, but now realize this is hopeless (and I’m trying to improve my mental health this week, not make it worse). Sorting through a pile of misinformation, trying to rebuild something true out of a collapsed mess of some truth buried in a mixture of nonsense and misunderstandings is a losing battle."

Woit does not seem to have the time, inclination or energy to point out what the main fallacy in the article is. But I can do that. The main fallacy is its illegitimate use of the word "conspiracy." 

The article discusses some critics of scientific academia, and inappropriately labels such critics as conspiracy theorists. But the type of criticism discussed is not conspiracy thinking. It is mainly criticism of overconfidence, elitism, dubious theorizing and poor performance. 

The Cambridge Dictionary defines "conspiracy" like this: "the activity of secretly planning with other people to do something bad or illegal." What is a conspiracy theorist? A conspiracy theorist is someone who believes that some group has hatched one or more secret plots to do harm or do something illegal. 

The typical critic of scientific academia is not a conspiracy theorist. Such a critic does not believe that scientists met in secret to create some plan to do harm.  Instead, such a critic typically believes that some scientists were guilty of overconfidence, dogmatism, dubious theorizing or poor performance. If you suspect or believe that some group is misspeaking or bungling, that does not make you a conspiracy theorist. 

The article mentions criticism of string theory. Critics of string theory are not conspiracy theorists. String theory is a widely-touted theory in physics, one that has no evidence in its favor. String theorists made grand promises around 1990 that they were going to deliver a "theory of everything." But so far string theory has been a failure as a scientific theory. No observational results have supported it. String theorists hoped very strongly that experimental results from a major scientific project (the Large Hadron Collider) would provide evidence in support of supersymmetry, a theory that string theory is built upon. No such evidence ever appeared: supersymmetry flunked its observational test, and the "superpartner" particles it predicted were never found. 

It is not any type of conspiracy thinking to be critical of the poor performance and overconfidence of string theorists in academia. Such criticism does not involve any belief or suspicion about people formulating a secret plan to do harm. Similarly, it is not any type of conspiracy thinking to be critical of the poor performance and overconfidence of neuroscientists in academia, nor is it any type of conspiracy thinking to gather reasons for disbelieving in the dogmas taught by such neuroscientists. Such criticism does not involve any belief or suspicion about people formulating a secret plan to do harm or do something illegal.

What has happened is that Dan Kagan-Kans has engaged in a very lazy defamation of science academia critics. He has engaged in mudslinging by attempting to label such people as conspiracy theorists. This kind of lazy defamation is common these days. In the 1950s, whenever you wanted to use the laziest and easiest way to defame some person calling for social progress, you might call such a person a communist, even when there was no evidence the person supported communism. Nowadays the laziest and easiest way to defame some person criticizing the overconfidence, unjustified dogmatism and poor performance of some scientists is to call such a person a conspiracy theorist, even when there is no evidence that the person believes in a conspiracy. 

I myself am a critic of the overconfidence, unjustified claims and poor performance of neuroscientists; but I am not a conspiracy theorist. I do not believe or suspect that neuroscientists ever formulated some secret plan to do anyone harm, or any type of secret plan. 

Another type of lazy libel people can make against critics of erring scientists is to call such critics "anti-science." 99% of the people called "anti-science" are actually pro-science, in that they are people who appreciate and applaud well-done science and good performance by scientists.  It is not "anti-science" to draw attention to misstatements and poor performance by some scientists, just as it is not "anti-baseball" to criticize some baseball players or baseball team for performing poorly. 

I consider myself very strongly pro-science, defining science as "facts established by observations" and "systematic efforts to establish facts by observations, experiments and analysis." I appreciate and applaud the work of scientists who do their jobs well, and avoid claiming to know things they do not know. I realize that much of the evidence basis behind this blog comes from the very good hard work of scientists who documented things such as brain physical shortfalls, people who performed well mentally with heavy brain damage, and cases of exceptional mental performance exceeding what should be possible from any brain. People who criticize misspeaking scientists and poorly performing scientists who use bad methods tend to actually be friends of science, just as testers who find bugs in software programs are friends of the software development process. 

What can be called science with a capital "S" is not the totality of what scientists believe in, but something very different: the totality of what scientists have proven. A person tending to take a "let's mainly stick to what scientists have proven" approach should not be called anti-science because of his lack of adherence to all of the belief traditions of scientists, many of which mainly involve belief community speech customs rather than beliefs mandated by proof. 

[Image: anti-science accusation. Caption: "A sleazy tactic"]

Wednesday, September 10, 2025

Neuroscientists Financially Entangled With Biotech Device Makers Produce Flawed Studies

In the week I am writing this post (auto-scheduled for later publication), I am observing what so often goes on in news reporting about science: a host of science news sites mindlessly parroting the unjustified claims of a university press release. You would think that by now our science journalists might have learned the lesson that university press releases these days are notorious for their hype, errors and misinformation, and that the claims of such press releases should be subjected to critical scrutiny. 

The unfounded claim this week is that at-home electrical brain stimulation "shows promise" for treating depression. The claim comes from a King's College London press release here, which is entitled "At home brain stimulation for depression found to be safe and effective." We have a discussion of a new study testing at-home head gadgets using something called transcranial direct current stimulation (tDCS).

The press release states this:

"Transcranial direct current stimulation (tDCS) is a form of self-administered, non-invasive brain stimulation that applies a weak, direct current of between 0.5 to 2 milliampere to the scalp via two electrodes. It is not electroconvulsive therapy (ECT), which delivers about 800 milliamperes to the brain causing a generalised seizure and can only be conducted under strict supervision. 

174 participants aged 18 and over, and with a diagnosis of at least moderate depression were randomly assigned to one of two treatment arms; 'active' tDCS or 'inactive' tDCS which was the same device but did not provide a current. Participants had a 10-week course of treatment, initially having five 30-minute sessions a week for the first three weeks, followed by three 30-minute sessions a week for the following seven weeks.

Researchers found that participants in the active arm of the trial showed significant improvements in the severity of their depression, as well as the overall clinical response and remission compared to those in the ‘inactive’ placebo control arm. The rates of treatment response and remission were three times higher in the active treatment arm compared to the placebo arm, where 44.9% in the active arm demonstrated a remission rate compared to 21.8% of the control group."

A study like this attempts to follow the conventions of randomized controlled trials (RCTs) using placebos. In such studies patients are randomly assigned to either a treatment group or a placebo group. Typically the treatment group gets the medicine being tested, and the placebo group gets only a similar-looking placebo, something like a sugar pill that looks just like the real medicine. The patient does not know which group he belongs to, and does not know whether the pill he is being given is just a placebo. Such a lack of knowledge is called blinding. Such a randomized controlled trial attempts to see whether the medical outcome in the group getting the real medication is superior to the medical outcome reported in the placebo group. When both the doctors or scientists and the patients are unaware (for at least part of the time) of whether particular patients belong to the get-the-real-treatment group or the placebo control group, such a study is called a "double-blind" study. 

The press release quoted above refers to the study here, entitled "Home-based transcranial direct current stimulation treatment for major depressive disorder: a fully remote phase 2 randomized sham-controlled trial." The study claims to have been a double-blind study, saying that it was a "fully remote, multisite, double-blind, placebo-controlled, randomized superiority trial." But the claim about being double-blind is not correct. The patients who were supposed to be blind as to whether or not they were getting the real electrical stimulation were not really blind about such a thing. 

The failure of the blinding protocol to achieve real blindness (a lack of knowledge by subjects about whether they were receiving the real treatment) is shown by the section of the paper entitled "Analysis of study blinding and unblinding." There we read this:

"Before unblinding at week 10 (end of trial), participants were asked to guess whether they thought they were receiving the active or sham tDCS device and their level of certainty, rating from ‘1’ for ‘very uncertain’ to ‘5’ for to ‘very certain’. A guess of active tDCS was made by 77.6% in the active treatment arm and 59.3% in the sham treatment arm; the difference was significant (P = 0.01)."

This is a very big difference: 77.6% of the people getting the real treatment guessed they were getting the real treatment, but only 59.3% of the people in the sham placebo group guessed so. This indicates a very major effect in which the people getting the real treatment were much more likely to think that they were getting the real treatment. In a randomized controlled trial that achieves an effective level of blinding, there should be no significant difference between the percentage of real-treatment subjects and placebo or sham-treatment subjects who think they are getting the real treatment. 
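The size of this blinding failure can be checked with a standard two-proportion z-test. Here is a minimal sketch; note that the roughly equal arm sizes of 87 are my assumption based on the 174 total participants, not the paper's exact arm counts:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 77.6% of the active arm guessed "active" (about 68 of an assumed 87)
# versus 59.3% of the sham arm (about 52 of an assumed 87).
z, p = two_proportion_z(68, 87, 52, 87)
print(round(z, 2), round(p, 3))   # z ~ 2.62, p ~ 0.009, consistent
                                  # with the reported P = 0.01
```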

Analyzing the paper, it's easy to see why patients would often have been able to figure out whether they were getting the real treatment. We read this:

"Active stimulation consisted of 2 mA direct current stimulation for 30 min with gradual ramp up over 120 s at the start and ramp down over 15 s at end of the session. Sham stimulation with the same device and app was used to resemble the active intervention and to receive the treatment schedule. An initial ramp up from 0 to 1 mA over 30 s then ramp down to 0 mA over 15 s was repeated at the end of the session to cause a tingling sensation that mimics active stimulation."

So the people getting the real treatment got 30 minutes of real direct current stimulation of their head, but the people getting the sham or placebo treatment got only brief ramp-up and ramp-down periods of weaker current, totaling about 90 seconds. Such a big difference helps explain why a much higher percentage of those getting the real treatment thought they were getting the real treatment. 

That much higher percentage is enough to explain the difference in the reported relief of depression symptoms, even assuming that the head device had zero real effectiveness in treating depression. Being much more likely to think that they had got some possibly effective real brain-zapping effect, the people in the real treatment group were much more likely to experience a placebo effect that was largely the power of suggestion. Similarly, if you assign two groups of patients sugar-pill placebos, telling the first group that the pill is a powerful anti-depressant and the second group that the pill is just a vitamin, then the first group will probably report more improvement in their depression, because of placebo effects and the power of suggestion. 

Nowadays biotech companies may have a very large incentive to fund and/or promote poorly designed and executed brain research, particularly whenever such research helps to promote some device that such companies sell. Biotech companies include the manufacturers of MRI devices, the manufacturers of EEG equipment, the manufacturers of implantable medical devices, and the manufacturers of noninvasive brain-related devices that a person may put on his head. The biotech industry is a trillion-dollar industry, and many billions of that involve brain-related products. 

Neuroscientists these days are thoroughly entangled with pharmaceutical companies and the manufacturers of biotech devices. The entanglement casts great doubt on the objectivity of any such scientists doing studies relating to the effectiveness of any brain-related medicine or brain-related medical device. These entanglements and their pernicious effects on the objectivity of neuroscientists are discussed in my post "How the Academia-Cyberspace-Pharmaceutical-Biotech-Publishing Complex Incentivizes Bad Brain Research," which you can read here.

The diagram below illustrates some of the entanglements and conflicts of interest going on:

[Image: who profits from bad research]

Whenever you read a science paper that seems to promote some medicine or medical device, it is always a good idea to study the "Competing Interests" section of the paper, in which scientists are supposed to reveal possible conflicts of interest that may have affected their objectivity. When you examine that "Competing Interests" section of the paper being discussed, all kinds of alarm bells should go off. Here it is:

"C.H.Y.F. reports the following competing interests: research grant funding on behalf of the University of East London from Flow Neuroscience (no. R102696); research grant funding from NIMH (no. R01MH134236), the Baszucki Brain Research Fund Milken Institute (no. BD0000009), the Rosetrees Trust (no. CF20212104), the International Psychoanalytic Society (no. 158102845), the MRC (no. G0802594), NARSAD and the Wellcome Trust. She is Associate Editor of Psychoradiology and Section Editor of the Brain Research Bulletin. A.H.Y. reports the following competing interests: paid lectures and advisory boards for the following companies with therapies used in affective and related disorders: Flow Neuroscience, Novartis, Roche, Janssen, Takeda, Noema Pharma, Compass, AstraZeneca, Boehringer Ingelheim, Eli Lilly, LivaNova, Lundbeck, Sunovion, Servier, LivaNova, Janssen, Allegan, Bionomics, Sumitomo Dainippon Pharma, Sage, Novartis and Neurocentrx. He is principal investigator for the following studies: the Restore-Life VNS registry study funded by LivaNova; ESKETINTRD3004: ‘An Open-label, Long-term, Safety and Efficacy Study of Intranasal Esketamine in Treatment-resistant Depression’; The Effects of Psilocybin on Cognitive Function in Healthy Participants; The Safety and Efficacy of Psilocybin in Participants with Treatment-Resistant Depression (P-TRD); A Double-Blind, Randomized, Parallel-Group Study with Quetiapine Extended Release as Comparator to Evaluate the Efficacy and Safety of Seltorexant 20 mg as Adjunctive Therapy to Antidepressants in Adult and Elderly Patients with Major Depressive Disorder with Insomnia Symptoms Who Have Responded Inadequately to Antidepressant Therapy (Janssen); An Open-label, Long-term, Safety and Efficacy Study of Aticaprant as Adjunctive Therapy in Adult and Elderly Participants with Major Depressive Disorder (MDD) (Janssen); A Randomized, Double-blind, Multicentre, Parallel-group, Placebo-controlled Study to Evaluate the Efficacy, Safety, and Tolerability of Aticaprant 10 mg as Adjunctive Therapy in Adult Participants with Major Depressive Disorder (MDD) with Moderate-to-severe Anhedonia and Inadequate Response to Current Antidepressant Therapy; A Study of Disease Characteristics and Real-life Standard of Care Effectiveness in Patients with Major Depressive Disorder (MDD) With Anhedonia and Inadequate Response to Current Antidepressant Therapy Including an SSRI or SNR (Janssen). He is UK Chief Investigator for the following studies: Novartis MDD study no. MIJ821A12201; Compass; and the COMP006 and COMP007 studies. Grant funding (past and present) includes: NIMH (USA); CIHR (Canada); NARSAD (USA); the Stanley Medical Research Institute (USA); MRC (UK); the Wellcome Trust (UK); the Royal College of Physicians of Edinburgh; the British Medical Association (UK); the VGH & UBC Foundation (Canada); WEDC (Canada); the CCS Depression Research Fund (Canada); the Michael Smith Foundation for Health Research (Canada); NIHR (UK). Janssen (UK) and EU Horizon 2020. He is the Editor of the Journal of Psychopharmacology and Deputy Editor of BJPsych Open. He has no shareholdings in pharmaceutical companies. S.S. reports the following competing interests: research grant funding on behalf of the University of Texas Health Science Center at Houston from Flow Neuroscience; paid advisory boards for the following companies: Worldwide Clinical Trials and Inversago; and Vicore Pharma. He is a full-time employee of Intra-Cellular Therapies. 
He has received grants and research support from NIMH (USA) (no. 1R21MH119441-01A1), NIMH (no. 1R21MH129888-01A1), NICHD (no. 1R21HD106779-01A1), SAMHSA (no. 6H79FG000470-01M003) and Fizer foundation. He has received research funding as a principal investigator or study/subinvestigator from or participated as consultant/speaker for Flow Neuroscience, COMPASS Pathways, LivaNova, Janssen, Relmada and the Psychiatry Education Forum. Intra-Cellular Therapies or National Institutes of Health (NIH) or SAMHSA or any other organizations had no role in study design and conduct; the collection, management, analysis and interpretation of the data; the preparation, review or approval of the manuscript; and the decision to submit the manuscript for publication. The study’s content is solely the responsibility of the authors and does not necessarily represent the official views of the Intra-Cellular Therapies or NIH or SAMHSA. R.M-V. has received consulting fees from Eurofarma Pharmaceuticals, Abbott and BioStrategies group; has research contracts with Boerhinger Ingelheim and Janssen Pharmaceuticals; and has received speaker fees from Otsuka, EMS and Cristalia. He is a member of the scientific boards of Symbinas Pharmaceuticals and Allergan. He is also the principal investigator for the following grants: NIH (nos. R21HD106779 and R21MH129888), Milken Institute (no. BD-0000000081). D.M. and L.H. work for Biomedical Statistical Consulting; they provide statistical support to MCRA and received payments from Flow Neuroscience. A.-R.G.-N., G.S., H.H., J.C.S., M.R., N.L., P.J.L., P.O., R.D.W. and S.S.K. declare no competing interests."

What is this Flow Neuroscience we keep hearing about? It is a manufacturer of a head-zapping device marketed as an aid for depression -- exactly the type of device being promoted by this poorly designed and unconvincing study. We should not expect scientists taking lots of money from some manufacturer to produce unbiased studies evaluating a device of such a manufacturer. 

Under the assumptions of materialism, the idea that you could improve your health by zapping your brain with electricity or magnetism makes no sense. Under the assumptions of materialists, we might expect that zapping your brain might disrupt the "delicate brain mechanism of memory storage" that materialists believe in.  Luckily such devices probably don't harm your memories, because such memories  are not actually stored in your brain.  It is ironic for neuroscientists to be promoting devices doing brain-zapping that make no sense under the assumptions of such neuroscientists.  Maybe when they promote such devices, somehow they are recalling the utter failure of neuroscientists to present a credible theory of brain memory storage or brain instant recall, and the utter failure of neuroscientists to find any trace of learned human knowledge by microscopic examination of brain tissue. 

[Image: neuroscientist conflict of interest. Caption: They are "in bed with each other"]