Saturday, May 2, 2026

"Behavioral Timescale Synaptic Plasticity" Is Not Any Well-Established Natural Reality

Quanta Magazine is a widely-read online magazine with slick graphics. On topics of science, the magazine is again and again guilty of the most glaring failures. Quanta Magazine has often assigned its online articles about great biology mysteries (involving riddles a thousand miles over the heads of PhDs) to writers identified as "writing interns." The articles at Quanta Magazine often contain misleading prose, groundless boasts or glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here.

The latest example of false news in Quanta Magazine is an article with the bogus headline "A New Type of Neuroplasticity Rewires the Brain After a Single Experience." The claim is BS, pure baloney. Anyone familiar with the structure of the brain should instantly realize what nonsense this headline is. Neurons have fixed positions in the brain. Synapses are like the roots of trees in a dense forest: just as roots lock trees into fixed positions in the forest, synapses help lock neurons into their positions in the brain. And synapses are almost as slow-changing as the roots of trees in a forest. So physically the idea that a brain could be instantly rewired is nonsense. 

The article starts out with this untrue claim: "Every experience we have changes our brain, the way a ceramicist reshapes a slab of clay." To the contrary, an experience does not change a brain. The analogy that the brain is like a lump of clay onto which impressions are written by experiences (like letters being written by the earliest cuneiform writers in Mesopotamia) is an extremely misleading analogy with no evidence to support it. The brain has nothing like a stylus that could write such impressions. And no trace of any such impressions can be found. Microscopic examination of brain tissue (which has been done very abundantly) has never produced the slightest trace of anything anyone learned. 

[Image: misleading brain analogy]

We have this vacuous attempt to explain memory, not corresponding to any physical reality in the brain: "This plasticity, the quality of being easily reshaped, makes the brain really good at learning — a quintessential process that allows us to remember the plotline of a novel, navigate a new city, pick up a new language, and avoid touching a hot stove." It is not correct that brains are "easily reshaped," and it is not correct to suggest that brain structure changes after learning. Scan a brain before and after 8 hours of school learning, and you will see no difference. 

The writer then tells us a myth with no basis in fact, stating this:

"Recently, neuroscientists described a new form of neuroplasticity that might be helping the brain learn across a timescale of several seconds — long enough to capture the behavioral process of learning from a single experience. In two recent reviews, published in The Journal of Neuroscience (opens a new tab)

 and Nature Neuroscience(opens a new tab), they describe 'behavioral timescale synaptic plasticity,' or BTSP. This type of learning in the hippocampus, the brain’s memory hub, is caused by an electrical change that affects multiple neurons at once and unfolds across several seconds."

The claim that something called "behavioral timescale synaptic plasticity," or BTSP, is a "type of learning" is a claim without any basis in fact. Before looking at one of the reviews cited in the quote above (the one that is not behind a paywall), I must give some prefatory description of the social construction of discovery legends in today's neuroscience. 

Research in cognitive neuroscience is dominated by low-quality studies. The study here concludes, "Our results indicate that the median statistical power in neuroscience is 21%." This is an abysmal figure. It has long been said that in experimental research, the goal should be a statistical power of 80%, which roughly corresponds to a likelihood of 80% that the result will be replicated. A study with a statistical power of 21% is a low-quality study that is likely to be announcing a false alarm. And when a research field has a median statistical power of 21%, that means half of the studies in the field have a statistical power of 21% or less. If such an estimation is correct, it means the great majority of neuroscience studies report results that are unreliable or untrue. 
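To make these power numbers concrete, here is a minimal Python sketch using the statsmodels library. The effect size and the group sizes below are my own illustrative assumptions, not figures taken from any particular study:

# Illustrative power calculation (all numbers assumed, not from any one study).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assume a "medium" true effect (Cohen's d = 0.5) in a two-group design.
for n_per_group in (6, 10, 25):
    p = analysis.power(effect_size=0.5, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group} per group -> statistical power = {p:.2f}")

# Group size needed per group to reach the conventional 80% power target:
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n needed per group for 80% power: {n_needed:.0f}")

Under these assumed numbers, a group size of six gives a power far below even the 21% median quoted above, and something on the order of 64 subjects per group is needed to reach the 80% target.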

A neuroscientist wishing to gain fame and funding may do some low-quality research, and claim his research is the first-ever discovery of some new effect. The neuroscientist may coin some name for this alleged effect, perhaps using some acronym. Whether that name is forgotten and never repeated may depend on whether other neuroscientists are willing to repeat the observational claim, and whether other scientists are willing to try to replicate the effect. If a claim of a discovery "presses the buttons" of neuroscientists by claiming something neuroscientists are eager to see, the original observer's discovery claims may be repeated by other scientists. But it will often be the case that there is no sound warrant for either the original observational claim or the similar claims by other scientists. An original low-quality paper reporting some effect may use poor experimental methods, and equally poor experimental methods may be used by others who claim to see the same effect. 

Consequently we should never "take it for granted" that something is true, just because some scientific paper says that scientist X claimed to see such a thing, and that scientist Y and scientist Z also claimed to see it. The social construction of groundless triumphal legends is extremely common in neuroscience literature. The standards for getting a neuroscience paper published are low. Junk research is published every week, and low-quality experimental papers are published every month. So you must always go back to the papers being cited, look at them critically, and ask: was there ever any decent evidence observed here?

Let's do that with one of the two papers cited by the Quanta Magazine article above, the one not behind a paywall. The paper is an extremely misleading review article entitled "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" which you can read here. The paper is short on descriptions of reliable observations of things naturally occurring, and very long on triumphal narrations, most of which are groundless boasts hailing supposedly marvelous accomplishments of the most poorly designed and low-quality scientific studies. The paper's chief triumphal narration is the claim that something called behavioral timescale synaptic plasticity (BTSP) was discovered in 2017 by Katie C. Bittner and others. 

We hear these claims by the "burst in the field" paper about this BTSP:
  • "It is triggered by occasional dendritic plateau potentials associated with a burst of firing in the soma." Neurons fire unpredictably, at rates between about 1 and 100 times per second. So anyone analyzing noisy, variable data on neuron firings might be able to find "occasional dendritic plateau potentials associated with a burst of firing in the soma," even if there is no such thing as BTSP. Similarly, anyone analyzing cloud formations sufficiently will be able to find occasional cloud clumps with this or that shape. 
  • "BTSP operates on the timescale of seconds rather than milliseconds and can therefore support associative learning over temporal delays relevant to behavior." If this alleged BTSP occurs quickly, that is no reason at all for thinking it has anything to do with any type of learning. 
  • "It leads to large changes in synaptic strength, enabling fast remodeling of neuronal representations that may support one-shot learning." There is no robust evidence for representations in either neurons or synapses.  The size and strength of synapses vary randomly over days and weeks. An increase in the strength of some synapses is never evidence that anything is being represented. And you cannot actually have "large changes in synaptic strength" being produced by anything operating on a timescale of seconds. There is no robust evidence that "large changes in synaptic strength" ever naturally occur on a timescale of seconds. 
The paper claims this: "A different kind of plasticity called behavioral timescale synaptic plasticity (BTSP) has recently been uncovered in area CA1 of the hippocampus (Bittner et al., 2017; Milstein et al., 2021) and has properties that appear to solve many of the aforementioned limitations of Hebbian plasticity (Magee and Grienberger, 2020)." Bittner's 2017 paper, the one supposedly first observing this alleged BTSP effect, is behind a paywall. But we can look at this 2021 paper by Milstein, co-authored by Bittner. It is a very low-quality paper that you can read here, one with the misleading title "Bidirectional synaptic plasticity rapidly modifies hippocampal representations." There is no robust evidence for the representations claimed.  

This 2021 paper co-authored by Milstein and Bittner starts out by reciting many an unfounded legend and dubious dogma of neuroscientists. Then we have in Figure 1 some actual fresh observational data. The data is nothing remotely resembling compelling observational evidence. We have data from a single mouse that was on a treadmill. The graphs are not really hard data, because we have references to "place fields" that are social constructs of neuroscientists. Here the paper makes a claimed observational finding that no one should take seriously or pay attention to unless it involved results from at least 15 or 20 animals per study group. But the study group consists of a measly one animal. The study group size is not even 10% of what it should be for reliable evidence to be claimed. 

Figure 2 is just as laughable as evidence. Since it has a caption of "mouse running," it seemingly also is data from a single mouse. Figure 3 fails to mention any study group size larger than 1. We have a graph mentioning "synaptic weights," one seeming to show some kind of increase. But the graph has no scale. No actual measurement of synaptic weights is occurring. Why of course -- synapses are things too tiny to be weighed with any accuracy. 

Later in the paper we have an indication that no reliable measurement of synaptic weights was occurring. We read, "We modeled changes in synaptic weights as a function of the time-varying amplitudes of these two biochemical intermediate signals, ET and IS." So apparently something else easier to measure was being measured, and the authors were engaging in the very dubious business of claiming that this other thing was some indication of synaptic weights.  That sounds rather like someone trying to deduce the weight of someone's meals by how much they spent on groceries this week -- not a reliable way of doing things. 
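To clarify what kind of modeling the quoted sentence describes, here is a generic sketch of a weight-update rule driven by two intermediate signals. The functional forms, time courses and constants below are my own placeholder assumptions, not the actual equations of the Milstein paper; the point is only that in this scheme the "synaptic weights" are outputs of a fitted model, not direct measurements:

# Generic sketch of a weight-update model driven by two intermediate signals.
# All time courses and constants here are assumed placeholders, not the
# equations used in the paper being discussed.
import numpy as np

dt = 0.01                        # seconds per timestep (assumed)
t = np.arange(0.0, 10.0, dt)
ET = np.exp(-((t - 4.0) ** 2))   # assumed "eligibility trace" time course
IS = np.exp(-((t - 5.0) ** 2))   # assumed "instructive signal" time course

k = 0.5                          # assumed learning-rate constant
w = 1.0                          # starting synaptic weight (arbitrary units)
for et, isig in zip(ET, IS):
    w += k * et * isig * dt      # weight changes where the two signals overlap
print(f"modeled final weight: {w:.3f}")

The modeled weight grows wherever the two assumed signals overlap in time, which illustrates the general scheme while showing that no synaptic weight is being directly observed at any point.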

Nothing reliable is being done here to show that this claimed "Behavioral Timescale Synaptic Plasticity" naturally exists, or that it can produce rapid changes in synaptic strength. And even if you were to show such a thing, that would do nothing to explain instant learning, since changes in synaptic strength are not credible explanations of how newly learned information could be stored. For further evidence of the low quality of Milstein and Bittner's 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," we need merely search for whether a sample size calculation was done. The paper confesses, "Sample sizes were not determined by statistical methods." Why of course. Since laughable, ridiculous sample sizes such as only one mouse were used (rather than decent study group sizes such as 15 or 20 mice per study group), the authors did not do a sample size calculation, which would have revealed some ridiculously low statistical power, way below 25%. 
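Here is how little a study group of one animal can support: a standard two-sample t-test cannot even be computed with one subject per group, because there is no way to estimate within-group variance. A minimal Python sketch, with made-up illustrative numbers:

from scipy import stats

group_a = [0.7]   # one animal per group (made-up illustrative values)
group_b = [0.3]

# With a single subject per group the within-group variance is undefined,
# so the test statistic and p-value both come out as nan.
print(stats.ttest_ind(group_a, group_b))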

[Image: questionable research practices in cognitive neuroscience]

By citing Milstein and Bittner's very low-quality 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has given us another example of what constantly goes on in the dysfunctional world of neuroscience research: paper authors citing very low-quality research as evidence of some effect they are arguing for, with the authors seeming to apply no critical scrutiny before citing a paper.  


The Quanta Magazine article and some of the papers mentioned above refer to a 2015 paper co-authored by Bittner and Jeffrey C. Magee, entitled "Conjunctive input processing drives feature selectivity in hippocampal CA1 neurons." While the study makes reference to a "normative" pool of 21 mice, the study's claims of detecting something are based on a way-too-small study group size of only 6 mice. It's another very low-quality paper failing to provide any decent evidence of "Behavioral Timescale Synaptic Plasticity." The authors confess that "no statistical methods were used to predetermine sample sizes," which is always a damning confession in a scientific paper of this type, a kind of "we were too lazy to act like good scientists" confession. But (paying no attention to quality factors) Quanta Magazine senselessly treats the study as if it was something important, and has a big photo of Magee. This is typical for Quanta Magazine, which seems never to pay any attention to whether neuroscience studies are meeting the hallmarks of robust, well-designed science. 

The paper "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has graphs from the year 2025 paper "Synaptic plasticity rules driving representational shifting in the hippocampus" that you can read here. That paper mainly refers vaguely to "mice" without mentioning exact study group sizes. But occasionally the paper does mention the exact study group sizes, which were way-too-small study group sizes such as only 4 mice, only 7 mice and only 9 mice. We read, "CA1 recordings were done in 4 mice, CA3 recordings in 7 mice and optogenetic experiments in 9 mice." These study group sizes were way-too-small for the paper to be taken as serious evidence of anything. No paper like this should be taken seriously unless 15 or 20 animals per study group were used. We have the damning confession in the paper that "No statistical method was used to predetermine sample size." If the paper authors had acted like good scientists by doing such a calculation, they would have found out how inadequate were the study group sizes they used. We also read, "Investigators were not blinded to CA1 or CA3 groups." This is a crucial defect for a paper like this. We have here a very low-quality example of a Questionable Research Practices study, one that fails to provide any good evidence for "Behavioral Timescale Synaptic Plasticity." 

So it seems the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" (which you can read here) fails to cite any convincing studies showing any such phenomenon as "Behavioral Timescale Synaptic Plasticity." I reach this conclusion not from a reading of all papers cited by that paper, but from looking at the studies discussed above, all of which were very low-quality papers badly guilty of Questionable Research Practices. The paper provides no robust evidence that scientists have demonstrated any natural ability by which synapses could be instantly or very quickly strengthened. 

The review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" is a mere review article, not a systematic review. In scientific literature, systematic review articles are articles following a clear methodology in regard to which papers are to be cited as evidence, a quality filter clearly defined in the paper. A mere review article involves citing any papers that the authors wish to cite, without the papers being subjected to a quality filter stated in the paper. In today's neuroscience literature there is a plethora of misleading review articles citing poor-quality papers. 

[Image: review article versus systematic review]

We get an indication in the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" that the results it reports typically involve not natural occurrences, but instead artificial occurrences produced by scientists doing special fiddling. In the article we read this: "The main approach, introduced by the Magee lab (Bittner et al., 2017), is to artificially induce BTSP with a long-lasting high–amplitude depolarization of the soma (typically a 300 ms 600 pA current injection) triggering a somatic CS (which is assumed to reflect a dendritic plateau), preceded or followed by synaptic activity within a few seconds time window (Fig. 3B)." 

So mainly what is being reported are artificial results produced by current injections -- experimenters zapping brains with electricity or electrical currents. The paper then says that this can be done either "in vitro" (that is, using tissue detached from an organism's body) or "in vivo" (observing something inside a living organism). But we are told that the "in vivo" observations require artificial experimenter manipulations such as "current injections" or "optogenetic stimulations." Both are artificial types of brain zapping. Observations requiring such energy injections by experimenters are not evidence of something naturally occurring in the brain. 

[Image: bungling neuroscientist]

I propose the term "electromisrepresentation" to describe misleading narratives of this type. We can define electromisrepresentation as the artificial production of brain effects by methods such as electrical stimulation, combined with a misleading narrative trying to suggest that the resulting effects can explain natural human capabilities. Electromisrepresentation has massively occurred in discussion of so-called long-term potentiation or LTP. 

The Quanta Magazine article based on this scientific paper is also very misleading bunk, an article that attempts to persuade us of the existence of something for which there is no robust experimental evidence. A very bad example of groundless narration, the article is full of untrue statements claiming magnificent accomplishments from scientists who actually ran very low-quality studies deserving mainly scorn because of multiple methodological sins: the use of way-too-small study group sizes, the lack of a blinding protocol, the lack of pre-registration, and an abundance of unreliable claims about the physical state of things (synapses) too small to have their physical state reliably measured. Occasionally the Quanta Magazine article gives an indication of what baloney it is shoveling, such as when it says, "There’s still much unknown about BTSP, especially the mechanism, which Madar said is 'quite speculative.'"

But that's par for the course in the untrustworthy world of today's neuroscience research, where neuroscientists boast like crazy about doing all kinds of wonderful things that were not actually done, because decent scientific procedures were never followed, and the experiments were poorly designed and guilty of many defects. 

There are two very strong reasons for rejecting all claims that there is good evidence that there are ever any quick natural increases in synaptic strength:
(1) The intrinsic unreliability of all attempts to measure the strength of synapses, given the incredibly small size of synapses, which makes all attempts to measure their strength dubious and unreliable. The largest parts of synapses (their clefts) are about 500 to 1000 times smaller than the largest part of a neuron (its soma or main body). 
(2) The intrinsic implausibility of any claims that synapses could naturally be quickly strengthened, given the fact that any synapse strengthening would require new protein synthesis, a process that takes minutes or hours. 

Scientists have never produced any credible tale to explain how either instant learning or learning of any type could occur in a brain, which has no components having any resemblance to a system for storing learned information. You would never show information storage or a storage of learned data or experiences by merely showing an increase in the strength of something.  No well-designed and robust scientific studies have ever produced any compelling evidence that learned information has been physically stored in brains. Claims about LTP arose from the type of artificial brain tissue zapping described above, with researchers ignoring that what was occurring did not correspond to natural events in the brain. Microscopic examination of brain tissue has never yielded the slightest trace of anything anyone learned or experienced -- not a single sentence, not a single word, not even a single character or letter or number or even a single pixel of anything anyone saw. 

Scientists have reliably determined that synapses are built of proteins that have average lifespans of only a few weeks, roughly a thousandth (1/1000, that is .001) of the maximum amount of time that humans can remember things, which is about sixty years. Besides utterly failing to explain how a brain could do memory storage, and how memories could persist for decades, scientists have utterly failed to explain how a brain could do instant memory retrieval, or memory retrieval of any type. We know the types of things that allow for instant retrieval of stored information: things such as addresses, indexing and sorting. No such things exist in the brain. The world of neuroscientist claims about memory is a world of fantasy and pareidolia, in which neuroscientists eagerly hoping to see things claim to see the faintest evidence of such things, like some wanderer in the desert eagerly scanning the horizon five miles away and claiming to see water there (although only a mirage is present, or only "see what you yearn to see" pareidolia is occurring). 
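The "roughly a thousandth" figure is easy to check with back-of-envelope arithmetic. A minimal Python sketch, assuming a three-week protein lifespan (the exact lifespan varies by protein):

# Compare an assumed ~3-week synaptic protein lifespan to a ~60-year memory span.
protein_lifespan_weeks = 3
memory_span_weeks = 60 * 52          # sixty years expressed in weeks
print(protein_lifespan_weeks / memory_span_weeks)   # about 0.001, i.e. ~1/1000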

For a discussion of the very many ways that scientists have of conjuring up claims of things that don't exist, see my post here entitled "Scientists Have a Hundred Ways To Conjure Up Phantasms That Don't Exist," and my post here entitled "The Social Construction of Eager Community Mirages."

Wednesday, April 29, 2026

Getting Billions, They Boasted They Would Get by 2025 a "Comprehensive Mechanistic Understanding of Mental Function"

 In recent years the two largest brain research projects have been a big US project launched in 2013 called the BRAIN Initiative, and a big European Union project launched in 2013 called the Human Brain Project. The BRAIN Initiative has received billions in funding, but has failed to fulfill the boasts it made about what it would do by the year 2025. 

More than a decade ago, the leaders of the BRAIN Initiative produced a document filled with hubris, one boasting about the grand and glorious things the project would achieve by the year 2025. The document was called Brain 2025: A Scientific Vision, and was offered at one of the project's two main web sites. You can read the document at the link here; after going to that page, you need to press a + button (next to "Expand accordion content") to get the whole text. 

The “scientific vision” laid out in the document is largely an ideological vision, based on the unbelievable idea that the human mind is merely the product of the brain. The dubious ideology of the authors is made clear in the very first sentence of the document, in which the authors state, “The human brain is the source of our thoughts, emotions, perceptions, actions, and memories; it confers on us the abilities that make us human, while simultaneously making each of us unique.” It has certainly not been proven that any brain has ever generated a thought or stored a memory.

In fact, later in the document the authors confess, “We do not yet have a systematic theory of how information is encoded in the chemical and electrical activity of neurons, how it is fused to determine behavior on short time scales, and how it is used to adapt, refine, and learn behaviors on longer time scales.” This is certainly true. No one has anything like a systematic theory of how a brain could store memories as neural states, nor has anyone come up with anything like a systematic theory of how a brain could generate a thought. So why, then, did the document start out by stating that “the human brain is the source of our thoughts, emotions, perceptions, actions, and memories”? No one has any business making such a claim unless he first has “a systematic theory of how information is encoded in the chemical and electrical activity of neurons.” But the document admits that no such theory exists.

But despite this one candid confession, the document was a writing of enormous hubris. The authors boasted that by the year 2025, the BRAIN Initiative would figure out how minds work. The document stated, "The most important outcome of the BRAIN Initiative will be a comprehensive, mechanistic understanding of mental function that emerges from synergistic application of the new technologies and conceptual structures developed under the BRAIN Initiative.” Notice how enormous is the predictive conceit of that statement, which sounds like a delusion of grandeur. The authors did not merely claim that they would "shed light" on how minds work, or that they would "get clues" as to how minds work. They boasted that their project would produce a "comprehensive, mechanistic understanding of mental function." Making a boast as big as the sky, the authors predicted that their project would tell us how brains produce minds and their phenomena. 

What has been the result of the BRAIN Initiative? No great breakthroughs have occurred. The results (to use English slang) are "peanuts" or "chickenfeed." 

[Image: BRAIN Initiative]

On the page here, we get a summary of the BRAIN Initiative's achievements in 2024. None of it sounds like an achievement relevant to whether brains make minds, except for the claim that there was developed a "brain-computer interface that can convert brain waves into speech with minimal training." We have a link to the page here, which makes the same claim. The claim is unfounded. The pages are referring to the study "Representation of internal speech by single neurons in human supramarginal gyrus." My post here explains why the study is not actually a demonstration of a "brain-computer interface that can convert brain waves into speech." 

What's going on in the study is a reading of brain waves during very rapid switching between "speak it" instructions and "think it" instructions, with no care taken to prevent subjects from speaking during the very short "think it" periods lasting only a few seconds. We should assume that during many of the claimed "internal speech" intervals there were actually "audible speech" events, because of a failure of subjects to follow a very hard-to-perform protocol, one seemingly designed to produce such "failure to follow instructions" events. Under such an assumption, the results can easily be explained, without assuming that any "converting of brain waves into speech" was occurring. The second of the BRAIN Initiative pages given above boasts that "For one of the two participants, the BCI [brain-computer interface] could decode several words of their inner dialogue with 79% accuracy during an online task." These are meager tiny-sample-size results easily explained by chance, or by assuming a difference in muscle movements that produce different types of brain waves, with supposed "inner dialogue" events often being verbal speaking events, as users failed to follow perfectly the hard-to-follow instructions involving rapid switching between speech and pure thinking. 
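To see how little a "79% accuracy" figure means when trial counts are small, here is a minimal Python sketch using the statsmodels library. The trial counts below are my own assumptions for illustration, since the excerpt quoted above does not state them:

from statsmodels.stats.proportion import proportion_confint

# 95% confidence intervals on a reported 79% accuracy, for assumed trial counts.
for n_trials in (24, 50, 100):
    successes = round(0.79 * n_trials)
    lo, hi = proportion_confint(successes, n_trials, alpha=0.05, method="wilson")
    print(f"{n_trials} trials: 95% CI on accuracy = ({lo:.2f}, {hi:.2f})")

With only a couple dozen trials, the interval is very wide, so a headline percentage from such a study by itself tells us little.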

On the same BRAIN Initiative page we have another similar boast of a "brain-computer interface."  It is a reference to a paper which makes no claim  of picking up an "inner dialogue" involving no muscle movements. Instead some patient with a speech problem had electrodes implanted in his brain, and some system is picking up his attempts to speak different words. Such an attempt can produce limited success mainly because different types of speech efforts (involving slightly different muscle movements) may produce different types of EEG waves. Muscle movements or attempted muscle movements show up very noticeably in EEG brain wave readings; and distinctive types of muscle movements corresponding to particular speech sounds (phonemes) may produce distinctive blips in EEG readings. 

Studies like this (hailed as examples of "mind reading" from brains) typically involve a variety of shady tricks, such as getting inputs from more than just brain waves -- for example, inputs from eye-tracking devices, which make it easy to determine what word or picture on a screen someone is focusing on. 

The reality is that the BRAIN Initiative has failed to produce any results backing up in any weighty way claims that brains make minds and that brains store memories. You cannot actually detect what someone is thinking from analyzing mere brain waves. Studies claiming to do such a thing typically involve various types of dubious methodology and objectionable techniques.  A well-designed, fairly conducted and well-analyzed study will always show a failure to detect from brain waves alone what someone is thinking.

These were additional unfulfilled boasts of the document entitled Brain 2025: A Scientific Vision:
  • "We expect to discover new forms of neural coding as exciting as the discovery of place cells, and new forms of neural dynamics that underlie neural computations." So-called "place cells" were never actually discovered. The claim that they were discovered is one of the many groundless triumphal legends of neuroscientists, who have a high tendency to repeat "old wives' tales" of the belief community they belong to. Read my post here for a debunking of claims that "place cells" were ever observed  All that happened was that scientists observed some cells, and claimed that some cells were more active when some rats were in certain places. The studies were never examples of robust science, because they were guilty of various methodological sins such as using way-too-small study group sizes. No actual new form of neural coding was ever discovered by the BRAIN Initiative or any other scientific project or scientific study. And no one ever discovered "neural dynamics that underlie neural computations."
  • "Through deepened knowledge of how our brains actually work, we will understand ourselves differently, treat disease more incisively, educate our children more effectively, practice law and governance with greater insight, and develop more understanding of others whose brains have been molded in different circumstances." No such bonanza of benefits resulted from the BRAIN Initiative. Neuroscience has done nothing to improve the education of children, and done nothing to improve the practice of law or government.
  • "We must understand how circuits give rise to dynamic and integrated patterns of brain activity, and how these activity patterns contribute to normal and abnormal brain functions. Our expectation is that this approach will answer the deepest, most difficult questions about how the brain works, providing the basis for optimizing the health of the normal brain and inspiring novel therapies for brain disorders." The BRAIN Initiative wasted billions floundering around in this dead end, but did not answer any of the "most difficult questions about how the brain works," or any of the "most difficult questions about how the mind works."  Scientists still have no credible tale to tell of how a brain could think, imagine, instantly learn or instantly recall. 
  • " We expect The BRAIN Initiative® to develop new biological reagents, possibly including genetically-modified strains of rodents, fish, invertebrates, and non-human primates; recombinant viruses targeted to different brain cell types in different species; genetically-encoded indicators of different forms of neural activity; and genetic tools for modulating neuronal activity."  Here the scientists (sounding like eugenics enthusiasts) fall into Frankenstein folly by boasting about how they will monkey with the genes of various organisms, including rodents and primates. The hubris involved here should provoke the gravest concerns.  And anyone familiar with the very substantive suspicions that the COVID-19 virus might have arose from a lab leak should shudder at the proposed gene fiddling. 

Saturday, April 25, 2026

She Had Above-Average Intelligence With Only About 15% of Her Brain

The failure of neuroscientists to adequately study minds is a very severe failure. You can get a PhD in neuroscience while making only a perfunctory study of human minds. An examination of the courses required to get a Master's Degree in neuroscience will typically show that only one or two courses in psychology are required. Doing a neuroscience PhD dissertation typically involves some highly specialized research on some very narrow topic, research that does not require much in the way of additional study of human minds and the mental capabilities and mental experiences of humans. The topic of human minds and human mental experiences is a topic of oceanic depth, requiring years of deep study for someone to get a good grasp of the full range of human mental states, human mental capabilities and human mental experiences. Very strangely, a typical neuroscientist is someone who will feel qualified to pontificate about what causes mental experiences, mental states and mental capabilities, even though he has typically done little deep study of them.

Ask a neuroscientist to describe the best examples of high capacity and high accuracy in human memory recall, and you will be likely to get a shrug of the shoulders, or an answer that is wrong.  Ask a neuroscientist to describe the best examples of human performance in tests of extrasensory perception (ESP), and you will be likely to get a shrug of the shoulders, or an answer that is wrong. Ask a neuroscientist to describe the best examples of humans learning or memorizing things very quickly, and you will be likely to get an answer showing no study of such a topic. Ask a neuroscientist to describe the fastest examples of human calculation involving no use of any objects such as pencil, paper or blackboards, and you will likely get an answer that fails to describe the most impressive cases. 

Rather amazingly, it also seems true that most or very many neuroscientists are not very deep and very thorough scholars of the topic of human brains. A typical neuroscientist may be able to tell you in very great detail about some particular aspects of human brains, and may be able to tell you in the greatest detail about how to use some machine that is used to study brains. But the same neuroscientist may have failed to properly study the topic of human brains in a way that involves learning every relevant thing one can about human brains. Ask that neuroscientist to tell you what happens when you remove half of a human brain, and you may get an answer that is wrong. Ask that neuroscientist to tell you how reliably chemical synapses transmit nerve signals (action potentials), and you may get an answer that is wrong. Ask that neuroscientist to tell you how quickly a brain electrically shuts down when the heart stops (reaching a state called asystole), and you may get an answer that is wrong. Ask that neuroscientist how quickly the average brain signal travels, and you will typically get an answer that is wrong, an answer failing to take into account all of the relevant factors such as the very strong slowing factor of cumulative synaptic delays, and the very strong slowing factor of the relatively slow transmission speed of dendrites.
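To illustrate how such slowing factors compound, here is a rough back-of-envelope Python sketch. Every number below is an assumption chosen for illustration (path length, axon speed, synapse count, per-synapse delay, dendrite length and speed), not a measured figure:

# Effective speed of a signal crossing a multi-synapse path (all values assumed).
path_length_m = 0.1            # assume a 10 cm path across the brain
axon_speed_m_per_s = 2.0       # assume slow, thin cortical axons dominate
synapses_crossed = 10          # assume the path crosses ten chemical synapses
synaptic_delay_s = 0.001       # ~1 ms delay per chemical synapse
dendrite_length_m = 0.01       # assume ~1 mm of slow dendrite per neuron crossed
dendrite_speed_m_per_s = 0.5   # dendritic propagation is much slower than axonal

total_time_s = (path_length_m / axon_speed_m_per_s
                + synapses_crossed * synaptic_delay_s
                + dendrite_length_m / dendrite_speed_m_per_s)
print(path_length_m / total_time_s, "m/s effective speed")

Under these assumptions the effective speed comes out well below the bare axonal conduction speed, which is the point: quoting an axon's conduction speed alone overstates how fast a multi-synapse signal path can be.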

Part of the job of properly studying brains is to study very thoroughly all of the most impressive cases of high mental performance despite very high brain damage. Relatively few neuroscientists show signs of having studied such a topic. In order to properly study such a topic, you must study unusual medical case histories.  Very many of the most important and relevant medical case histories are recorded in books, newspapers and magazines. But can you ever recall reading of a neuroscientist searching newspapers for unusual case histories in neuroscience? 

Luckily there are some web sites that contain very many of the most relevant examples of such medical case histories, histories that are relevant to the question of whether the human brain is the source of the mind and whether the human brain is the storage place of human memories. One of those sites is the very site you are reading. In my series of posts labeled "High Mental Function Despite Large Brain Damage," which you can read here, I describe many of the most important case histories that are relevant to the question of whether the human brain is the source of the mind. Now let me provide some more such cases. 

The first case involves hydrocephalus, a disease in which a brain has excessive watery fluid. In cases of hydrocephalus a brain may end up in a state that is mostly watery fluid. The brain scan of someone with severe hydrocephalus might look something like the schematic visual below. The black part in the middle is a watery fluid that has basically no neurons. 


The case of Sharon Parker is described in a 2003 news story entitled "Success of Nurse Who Lost Most of Her Brain." You can read the story here. We read this:

"When she was a baby, Sharon Parker's parents were told a rare and incurable condition meant she would not reach her fifth birthday.

She was left with only 15 per cent of her brain and there was little hope she could lead a normal life. But she defied the experts to become an astonishing success story.

Now 39, Mrs Parker is a nurse with a high IQ who is happily married with three children....She was diagnosed with congenital hydrocephalus - water on the brain - when she was nine months old. Doctors drained the liquid from her skull with a tube but her brain mass had been compacted in the outer edges of her skull, leaving a gaping hole in the middle....As a 16-year-old, she passed eight O-Levels and her IQ was later found to be 113, putting her in the top 20 per cent of the population.

The hydrocephalus has left her with a below-average short-term memory so she carries a notebook to remind herself to do things. However, tests have found that her long-term memory is better than average.

After leaving school, Mrs Parker decided to become a nurse and soon after starting her training, she met her future husband David, a builder who is now 45. The couple were married three years later and have three children...

She often participates in studies, including one recently in Ohio when she was examined by one of the world's leading experts on brain mass. Graham Teesdale, Professor of Neurosurgery at the University of Glasgow, said she demonstrated how adaptable the brain can be even when it is incomplete. 'She shows how the brain has an immense capacity to cope and adapt,' he said. 'Some people with the same acute problem experience problems in thought processes but others are able to function totally normally.' "

A materialist who believes that the brain is the source of the mind may wince after reading about this case history. But there is another hydrocephalus case that may be even better as evidence that brains do not make minds. Coincidentally, this case also involves someone named Parker, but someone other than Sharon Parker: a male with almost no brain. 

We read about the case on the page here:

"Parker was born on September 9, 2008, with hydrocephalus, or excess fluid on the brain. Parker’s parents received the diagnosis at 20 weeks in the pregnancy that there was a blockage between the third and fourth ventricles of Parker’s brain, which was preventing the cerebrospinal fluid from draining into the body. As a result, the fluid would build up and compress Parker’s brain matter against his skull, making it almost non-existent, threatening to severely hinder Parker’s early neurological development. At birth, the average baby has 90–95% brain matter and 5–10% fluid within the cranial cavity; Parker had over 98% fluid and less than 2% brain matter, amounting to a mere 8 millimeters of brain matter at birth."

Later on the same page we read this:

"He attends a special-needs kindergarten class, where he continues to thrive and demonstrate an inexplicable intellect and remarkable social skills.

Parker is truly a miracle – the child who once was thought may never walk or talk now plays, dances, sings, never stops talking (having never met a stranger) and hopes to one day become a sportscaster. Parker has far exceeded every expectation of his doctors and also adds being named the 2015 Ace All-Star to his long list of remarkable achievements."

Tuesday, April 21, 2026

The "Research Dystopia" of Dogma-Driven Neuroscience Experimentation

A dystopia is a fictional world in which things have gone horribly wrong. You might use the term "research dystopia" to describe certain fields of scientific research in which researchers are dedicated to proving untrue or implausible dogmas, by the use of poor methods of experimentation or analysis. Such a research dystopia is largely a world of fiction, in which false or implausible claims keep being repeated. In such a research dystopia, things have gone horribly wrong, because there is a predomination of poor techniques of scientific experimentation and scientific analysis.

Sadly, the field of research known as cognitive neuroscience is a field you could call a research dystopia, without being too far off the mark. Such a field is largely a world of fiction, in which researchers keep making untrue claims about brains being the source of minds and brains being the storage place of memories. And it is a world in which things have gone horribly wrong, because researchers keep churning out miserably designed studies guilty of various types of Questionable Research Practices. 

The latest evidence that cognitive neuroscience research is a research dystopia can be found in a press release on the clickbait-heavy site earth.com, and in the scientific paper that press release is promoting. The press release has the very untrue title "Scientists can now 'edit' brain circuits to enhance memory."  We read this very false claim: "New research shows that trimming specific synapses in a mouse brain circuit can strengthen memories and help them last longer." We read about some weird experiment in which scientists fiddled with synapses in the brains of a few mice. 

Making the untrue claim that a standard measure of memory was used (a claim that is untrue for reasons I will soon explain), the press release says this:

"Mice with edited hippocampal circuits froze more during recall tests, a standard memory measure.  With mild training, that advantage appeared two days after learning and remained 23 days later, strengthening both recent and long-term memory. With more intense training, the treated group held steady while controls faded, so the difference was not just a lucky one-off."

Helping to create the illusion that some reliable research was done, the press release makes no mention of the number of mice used in the experiment. A look at the scientific paper being discussed by the press release gives us the answer to that question. The scientific paper is the very low-quality science paper here, one entitled "Remodeling synaptic connections via engineered neuron-astrocyte interactions." In the scientific paper we read that the number of mice being tested was ridiculously low. The study group sizes were way-too-small sizes such as only 3 mice or only 6 mice. No study of this type should be taken seriously unless the study group sizes were at least 15 or 20 animals per study group. You do not have any decent evidence of a real effect if you merely use study group sizes of 6 animals per study group in a study comparing performance between altered mice and unaltered mice. It is way, way too easy to get a false alarm using a study group size so small. 

Below is a graph from Figure 8 of the paper:

[Image: graph from Figure 8, with per-mouse "freezing behavior" dots]
This is what the paper is offering as its main evidence for a change in memory performance produced by the brain fiddling that the experimenters did. Each of the dots represents the claimed "freezing behavior" of one mouse in only one trial. By counting the number of circles, we can see that the study group sizes were only 6. 

The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" has a Table 2 giving recommendations for minimum study group sizes for different types of research. The minimum number of subjects for an experimental study is 21 subjects per study group. 

[Image: minimum sample sizes]

We simply cannot take seriously any study of this type using such a way-too-small study group of only six mice per study group. Using a study group size that small, it is way, way too easy to get a false alarm result, purely by chance. Similarly, if I do a test of the effectiveness of rubbing a lucky rabbit's foot charm in two groups of six people, and one of the groups reports having better luck on the few days of the test, that is no decent evidence for the effectiveness of rubbing a rabbit's foot charm to increase luck. It is way, way too easy to get such a result from pure chance. 
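A quick simulation shows how large a purely-chance "effect" can look with groups of six. In the Python sketch below, both groups are drawn from the same distribution, so any apparent difference is pure noise; the numbers are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_experiments = 6, 100_000
big_effects = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)     # group A: no real effect present
    b = rng.normal(0.0, 1.0, n_per_group)     # group B: same distribution as A
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = abs(a.mean() - b.mean()) / pooled_sd  # observed effect size (Cohen's d)
    if d > 1.0:                               # count apparent "large" effects
        big_effects += 1
print("fraction of null experiments with an apparent large effect:",
      big_effects / n_experiments)

Under these assumptions, roughly one null experiment in ten shows an apparent "large" effect (Cohen's d above 1.0) by chance alone.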

Another reason why the reported result is worthless as evidence is that it used the utterly unreliable technique of trying to judge memory performance by judging the "freezing behavior" of mice. That technique is not a reliable technique for judging fear or recall in rodents, for reasons explained at length in my post here. 

The press release promoting this very low-quality paper makes the claim that "a standard memory measure" was used. That is not correct. Although very often used in the dysfunctional world of rodent neuroscience research, the technique of attempting to measure "freezing behavior" in rodents is actually a technique that involves no standard measurement method. The long appendix at the end of this post documents the utter lack of standards when such "freezing behavior" estimations occur. And when "freezing behavior" estimations occur, it is not even memory that is being measured. What is being measured is what percent of some time interval a rodent is not moving. 

Neuroscientists love the technique of "freezing behavior" estimations, because it is a "see whatever you are hoping to see" type of technique, in which the desired positive result can almost always be claimed, by fiddling around with how the "freezing behavior" estimation occurs. The lack of any real standard in such estimations is only part of the reason why "freezing behavior" estimations are an utterly unreliable technique for measuring fear or recall in rodents. 



We have in the very poor-quality paper "Remodeling synaptic connections via engineered neuron-astrocyte interactions" no decent evidence that manipulating synaptic connections has any effect on memory. The experimenters have used a study group size so low that it would not be good evidence of a memory change even if a reliable technique had been used to measure recall. And no such reliable technique for measuring recall was used, but only the worthless, unreliable technique of attempting to judge "freezing behavior." The authors might have discovered how way-too-small their study group sizes were if they had done a sample size calculation. But they make no mention of doing such a calculation.

Below are some quotes mentioning the use of too-small study group sizes and too-low statistical power in neuroscience studies. All references to underpowered studies are references to studies using too-small study group sizes. 

  • "Postmortem studies need n = 26 subjects to detect the same effect 80 % of the time, while MRI studies need n = 84 subjects; thus, most individual MRI studies and both postmortem studies were underpowered." (Link)
  • "The median neuroimaging study sample size is about 25...Reproducible brain-wide association studies require thousands of individuals." (Link)
  • "Critical appraisal indicated that studies were underpowered, did not match cases with controls and failed to account for confounding factors." (Link)
  • "Power calculations suggested that studies were underpowered." (Link)
  • "The small sample sizes of the current literature make it very likely that studies were underpowered, resulting in a host of issues such as imprecise association estimates, imprecise estimated effect sizes, low reproducibility, and reduced chances of detecting a true effect or, conversely, that 'detected' effects are indeed true." (Link)
  • "Most validation studies were underpowered and hence may have given a misleading impression of accuracy."  (Link)
  • "We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience." (Link)
The study here concludes, "Our results indicate that the median statistical power in neuroscience is 21%." This is an abysmal figure. It has long been said that in experimental research, the goal should be a statistical power of 80%, which roughly corresponds to a likelihood of 80% that the result will be replicated. A study with a statistical power of 21% is a low-quality study that is likely to be announcing a false alarm. And when a research field has a median statistical power of 21%, that means half of the studies in the field have a statistical power of 21% or less. If such an estimation is correct, it means the great majority of neuroscience studies report results that are unreliable or untrue. 

The combination of very bad research practices and the enormous bias of researchers eagerly trying to prove old, untenable dogmas about brains makes the field of neuroscience experimentation something you might call a research dystopia,  a kind of experimental wasteland. 

Appendix: The Lack of Any Standards in "Freezing Behavior" Estimations

 A paper describing variations in how "freezing behavior" is judged reveals that no standard is being followed. The paper is entitled "Systematic Review and Methodological Considerations for the Use of Single Prolonged Stress and Fear Extinction Retention in Rodents." The paper has the section below telling us that statistical techniques to judge "freezing behavior" in rodents are "all over the map," with no standard statistical method being used:

"For example, studies using cued fear extinction retention testing with 10 cue presentations reported a variety of statistical methods to evaluate freezing during extinction retention. Within the studies evaluated, approaches have included the evaluation of freezing in individual trials, blocks of 2–4 trials, and subsets of trials separated across early and late phases of extinction retention. For example, a repeated measures analysis of variance (RMANOVA) of baseline and all 10 individual trials was used in Chen et al. (2018), while a RMANOVA was applied on 10 individual trials, without including baseline freezing, in Harada et al. (2008). Patterns of trial blocking have also been used for cued extinction retention testing across 10 trials, including blocks of 2 and 4 trials (Keller et al., 2015a). Comparisons within and across an early and late phase of testing have also been used, reflecting the secondary extinction process that occurs during extinction retention as animals are repeatedly re-exposed to the conditioned cue across the extinction retention trials. For example, an RMANOVA on trials separated into an early phase (first 5 trials) and late phase (last 5 trials) was used in Chen et al. (2018) and Chaby et al. (2019). Similarly, trials were averaged within an early and late phase and measured with separate ANOVAs (George et al., 2015). Knox et al. (2012a,b) also averaged trials within an early and late phase and compared across phases using a two factors design.

Baseline freezing, prior to the first extinction retention cue presentation, has been analyzed separately and can be increased by SPS (George et al., 2015) or not affected (Knox et al., 2012b; Keller et al., 2015a). To account for potential individual differences in baseline freezing, researchers have calculated extinction indexes by subtracting baseline freezing from the average percent freezing across 10 cued extinction retention trials (Knox et al., 2012b). In humans, extinction retention indexes have been used to account for individual differences in the strength of the fear association acquired during cued fear conditioning (Milad et al., 2007, 2009; Rabinak et al., 2014; McLaughlin et al., 2015) and the strength of cued extinction learning (Rabinak et al., 2014).

In contrast with the cued fear conditioning studies evaluated, some studies using contextual fear conditioning used repeated days of extinction training to assess retention across multiple exposures. In these studies, freezing was averaged within each day and analyzed with a RMANOVA or two-way ANOVA across days (Yamamoto et al., 2008; Matsumoto et al., 2013; Kataoka et al., 2018). Representative values for a trial day are generated using variable methodologies: the percentage of time generated using sampling over time with categorical handscoring of freezing (Kohda et al., 2007), percentage of time yielded by a continuous automated software (Harada et al., 2008), or total seconds spent freezing (Imanaka et al., 2006; Iwamoto et al., 2007). Variability in data processing, trial blocking, and statistical analysis complicate meta-analysis efforts, such that it is challenging to effectively compare results of studies and generate effect size estimates despite similar methodologies."

As far as the techniques that are used to judge so-called "freezing behavior" in rodents, the techniques are "all over the map," with the widest variation between researchers. The paper tells us this:

"Another source of variability is the method for the detection of behavior during the trials (detailed in Table 1). Freezing behavior is quantified as a proxy for fear using manual scoring (36% of studies; 12/33), automated software (48% of studies; 16/33), or not specified in 5 studies (15%). Operational definitions of freezing were variable and provided in only 67% of studies (22/33), but were often explained as complete immobility except for movement necessary for respiration. Variability in freezing measurements, from the same experimental conditions, can derive from differential detection methods. For example, continuous vs. time sampling measurements, variation between scoring software, the operational definition of freezing, and the use of exclusion criteria (considerations detailed in section Recommendations for Freezing Detection and Data Analysis). Overall, 33% of studies did not state whether the freezing analysis was continuous or used a time sampling approach (11/33). Of those that did specify, 55% used continuous analysis and 45% used time sampling (12/33 and 10/33, respectively). Several software packages were used across the 33 studies evaluated: Anymaze (25%), Freezescan (14%), Dr. Rat Rodent's Behavior System (7%), Packwin 2.0 (4%), Freezeframe (4%), and Video Freeze (4%). Software packages vary in the level of validation for the detection of freezing and the number and role of automated vs. user-determined thresholds to define freezing. These features result in differential relationships between software vs. manually coded freezing behavior (Haines and Chuang, 1993Marchand et al., 2003Anagnostaras et al., 2010). Despite the high variability that can derive from software thresholds (Luyten et al., 2014), threshold settings are only occasionally reported (for example in fear conditioning following SPS). There are other software features that can also affect the concordance between freezing measure detected manually or using software, including whether background subtraction is used (Marchand et al., 2003) and the quality of the video recording (frames per second, lighting, background contrast, camera resolution, etc.; Pham et al., 2009), which were also rarely reported. These variables can be disseminated through published protocols, supplementary methods, or recorded in internal laboratory protocol documents to ensure consistency between experiments within a lab. Variability in software settings can determine whether or not group differences are detected (Luyten et al., 2014), and therefore it is difficult to assess the degree to which freezing quantification methods contribute to variability across SPS studies with the current level of detail in reporting. Meuth et al. (2013) tested the differences in freezing measurements across laboratories by providing laboratories with the same fear extinction videos to be evaluated under local conditions. They found that some discrepancies between laboratories in percent freezing detection reached 40% between observers, and discordance was high for both manual and automated freezing detection methods." 

It's very clear from the quotes above that once a neuroscience researcher has decided to use "freezing behavior" to judge the amount of fear or recall in mice, then he pretty much has a nice little "see whatever I want to see" situation. Since no standard protocol is being used in these estimations of so-called "freezing behavior," a neuroscientist can pretty much report exactly whatever he wants to see in regard to "freezing behavior," by just switching around the way in which "freezing behavior" is estimated, until the desired result appears. We should not make here the mistake of assuming that those using automated software for judging "freezing behavior" are getting objective results.  Most software has user-controlled options that a user can change to help him see whatever he wants to see. 

When "freezing behavior" judgments are made, there are no standards in regard to how long a length of time an animal should be observed when recording a "freezing percentage"  (a percentage of time the animal was immobile). An experimenter can choose any length of time between 30 seconds and five minutes or more (even though it is senseless to assume rodents might "freeze in fear" for as long as a minute).  Neuroscience experiments typically fail to pre-register experimental methods, leaving experimenters to make analysis choices "on the fly." So you can imagine how things work. An experimenter might judge how much movement occurred during five minutes or ten minutes after a rodent was exposed to a fear stimulus. If a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 30 seconds, then 30 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise,  if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 60 seconds, then 60 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise,  if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over two minutes, then two minutes would be chosen as the interval to be used for a "freezing percentage" graph. And so on and so forth, up until five minutes or ten minutes. If the researcher still has no "more freezing" effect he can report, the researcher can always do something like report on only the last minute of a larger time length, or the last two minutes, or the last three minutes, or the last four minutes. 

The researchers can also arbitrarily choose what length of immobility will be counted as "freezing" and added to the "freezing percentage" figure. That length of immobility can be 1 second or 2 seconds or any number of seconds between 1 and 10.

Because there are 20 or 30 or 50 different ways in which the data can be analyzed, each with about a 50% chance of yielding the desired result, it is almost certain that the researcher will be able to report some "higher freezing level," even if the tested interventions or manipulations had no real effect on memory. Such shenanigans drastically depart from good, honest, reliable experimental methods.
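The arithmetic behind that "almost certain" is simple. The Python sketch below assumes the analyses are independent (real analysis windows overlap, which would lower these numbers somewhat, but not the conclusion):

# Chance that at least one of k analysis choices "works out" for the researcher.
for k in (5, 10, 20, 50):
    p_desired = 1 - 0.5 ** k   # each analysis assumed ~50% likely to point the desired way
    p_fluke = 1 - 0.95 ** k    # each analysis assumed 5% likely to give p < 0.05 under null
    print(f"k = {k}: desired-direction somewhere = {p_desired:.3f}, "
          f"false positive somewhere = {p_fluke:.3f}")

With 20 or more ways to slice the data, finding at least one favorable-looking result is a near certainty, exactly as argued above.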