Tuesday, May 5, 2026

In 2016 Scientists Confessed Science Research Was Broken, and in 2026 Things Are No Better

A 2016 article on Vox.com was a remarkable confession of how bad things are in the world of scientific research. The article was entitled "The 7 biggest problems facing science, according to 270 scientists." We read this:

"We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science....Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public."

The author writes about what is called publication bias: the tendency of science journals to prefer publishing studies that report finding some positive effect over studies that find no such effect, reporting what is called a null result. We read this:

"Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase 'publish or perish'  hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side."

The statement won't be very clear to the average reader. What the author means by "studies that fail" is not studies that fail to follow good science practice or that fail to be completed, but instead studies that fail to report a positive result. An example would be a study testing whether removing one particular gene from mice has an effect on their memory, and which reports no effect from such a removal. By "failed studies can mean career death" the author means studies that failed to report a positive effect and then did not get published, because of publication bias in which positive results are preferred. A scientist who did enough such unpublished studies might end up with a low count of published papers.

We read about conflicts of interest:

"Already, much of nutrition science, for instance, is funded by the food industry — an inherent conflict of interest. And the vast majority of drug clinical trials are funded by drugmakers. Studies have found that private industry–funded research tends to yield conclusions that are more favorable to the sponsors."

Such conflicts of interest taint neuroscience research, because a large fraction of neuroscience research is funded (directly or indirectly) by pharmaceutical companies and biotech device manufacturers hoping to produce some result they can claim is scientific evidence in favor of some pill or device they are selling.  We read in the article that some professors are spending up to 50% of their time writing research grant proposals. 

We get this quote:

"'As it stands, too much of the research funding is going to too few of the researchers,' writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. 'This creates a culture that rewards fast, sexy (and probably wrong) results.' ”

A culture that rewards wrong results? How messed up is that?

We read the following:

"The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more 'revolutionary.'  (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others....Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

'I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,' writes Jess Kautz, a PhD student at the University of Arizona. 'And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work '.”

We read a quote from Joseph Hilgard, who says, "The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true." We read the claim that 85 percent of research is "routinely wasted on poorly designed and redundant studies." We read the claim that up to 30 percent of research turns out to be wrong or to consist of exaggerated results.

We read about how badly published results fail to be replicated. We have a big boldface section header saying this:

"Replicating results is crucial. But scientists rarely do it."

We get this example:

"The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all."
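As a side note, the arithmetic in that quoted passage can be checked in a few lines. The numbers below are taken directly from the quote; this is a simple sanity check, nothing more:

```python
# Replication outcomes for the 83 highly cited psychiatric-treatment
# studies described in the quote above (2015 analysis).
replicated = 16        # successfully replicated
contradicted = 16      # contradicted by follow-up attempts
smaller_effect = 11    # substantially smaller effects on retest
never_tested = 40      # never subject to replication at all

total = replicated + contradicted + smaller_effect + never_tested
print(total)                              # 83, matching the quote
print(round(100 * replicated / total))    # 19 -- only about 19% held up
print(round(100 * never_tested / total))  # 48 -- "nearly half" never retested
```

So barely one in five of those highly cited results had ever been successfully replicated.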

We have this statement about misleading science journalism and misleading university press releases:

"Science journalism is often full of exaggerated, conflicting, or outright misleading claims...Sometimes bad stories are peddled by university press shops....Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice....The 'toxic dynamic' of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step."

The long 2016 Vox article mentions some ways that this sorry state of science research could be improved. But in the ten years since the article was published, there has been no improvement. All the problems discussed in the 2016 article are still with us, and are as bad as they were in 2016. There is no evidence that research scientists and science journalists are improving their dysfunctional and defective methods. And the many severe problems the article mentions are only part of the problem. Many other severe problems in science research and science journalism are not mentioned in the Vox article, such as these:

(1) The tendency of scientific researchers to try to do research that supports prevailing dogmas of scientists, which are often groundless dogmas or poorly supported dogmas, rather than to do objective research that takes a "follow the evidence wherever it leads" approach. 

(2) The strong economic motivations that underlie misleading clickbait headlines, motivations such as the desire to produce page views that are profitable because of revenue-generating ads on such pages. 

(3) The use of way-too-small study group sizes in fields such as neuroscience, resulting mostly in unreliable "false alarm" results. 

(4) The use of poor methods of measurement in fields such as neuroscience, such as the widespread use of unreliable judgments of "freezing behavior" to judge fear or recall in rodents, rather than other more reliable methods. 

(5) A failure to follow a detailed blinding protocol.

(6) The extensive use of "keep torturing the data until it confesses" tactics, in which scientists fail to commit themselves to one straightforward method of gathering and analyzing data, and instead act as if they had a license to endlessly play around with the data, subjecting it to bizarre, convoluted and arbitrary analysis pathways that end up distorting and contorting what was gathered.

[Image: smoke and mirrors neuroscience. What goes on in today's experimental neuroscience.]

When such problems exist in abundance, neuroscientists are largely engaging in a sham and a scam when they take federal money and pretend to be engaging in rigorous experimental science. 

[Image: junk neuroscience]

Research science and science journalism are broken, and there is no sign that they are mending themselves.

Questionable Research Practices

A recent article on the Retraction Watch site is captured in the screenshot below. Notice the graph showing that the growth of fake or shoddy "paper mill" papers is outpacing the growth of regular scientific papers.

[Image: paper mill fraud]

So-called "paper mills" are for-profit companies that offer "editorial services" to scientists, services which can range from writing much of a paper to writing an entire paper. The outputs of paper mills are very often fake papers, or programmatically generated papers in which everything is written by computer programs that lift most of their text from other papers. Such paper mills make heavy use of AI programs such as ChatGPT, which often give wrong answers or produce low-quality text sometimes called "AI slop." Someone can ask a program such as ChatGPT to generate a hypothetical paper sounding like the typical paper published in some line of research, and then try to get the AI-generated paper published as a real paper. Or someone might ask a system like ChatGPT to write a summary of research on some narrow topic, and then try to get that published as a "review article" or "systematic review" or "meta-analysis." Many scientists unwilling to get involved with shady paper mills are nonetheless using AI systems such as ChatGPT to write much of the text of their papers. The integrity and credibility of such a paper may be compromised by the use of AI-generated text not written by any mind that understands the topic discussed.


As Exhibit A to back up the claim that the state of scientific research may be worsening, I offer a recent article by Ross Andersen in The Atlantic, one entitled "Science Is Drowning in AI Slop." We read this, referring to the "large language models" used by so-called artificial intelligence or AI:

"Almost immediately after large language models went mainstream, manuscripts started pouring into [scientific] journal inboxes in unprecedented numbers. Some portion of this effect can be chalked up to AI’s ability to juice productivity, especially among non-English-speaking scientists who need help presenting their research. But ChatGPT and its ilk are also being used to give fraudulent or shoddy work a new veneer of plausibility, according to Mandy Hill, the managing director of academic publishing at Cambridge University Press & Assessment. That makes the task of sorting wheat from chaff much more time-consuming for editors and referees, and also more technically difficult."

Because of the AI slop problem, the state of science research may be even worse today than the very bad state described in the 2016 Vox article. 

Saturday, May 2, 2026

"Behavioral Timescale Synaptic Plasticity" Is Not Any Well-Established Natural Reality

Quanta Magazine is a widely read online magazine with slick graphics. On topics of science the magazine is again and again guilty of the most glaring failures. Quanta Magazine has often assigned its online articles about great biology mysteries (involving riddles a thousand miles over the heads of PhDs) to writers identified as "writing interns." The articles at Quanta Magazine often contain misleading prose, groundless boasts or glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here.

The latest example of false news in Quanta Magazine is an article with the bogus headline "A New Type of Neuroplasticity Rewires the Brain After a Single Experience." The claim is BS, pure baloney. Anyone familiar with the structure of the brain should instantly realize what nonsense this headline is. Neurons have fixed positions in the brain. Synapses are like roots in a dense forest: just as roots lock trees into fixed positions in the forest, synapses help lock neurons into their positions in the brain. Synapses are almost as slow-changing as the roots of trees. So physically the idea that a brain could be instantly rewired is nonsense.

The article starts out with this untrue claim: "Every experience we have changes our brain, the way a ceramicist reshapes a slab of clay." To the contrary, an experience does not change a brain. The analogy that the brain is like a lump of clay onto which impressions are written by experiences (like letters being written by the earliest cuneiform writers in Mesopotamia) is an extremely misleading analogy with no evidence to support it.  The brain has nothing like a stylus that could write such impressions. And no trace of any such impressions can be found. Microscopic examination of brain tissue (which has been done very abundantly) has never produced the slightest trace of anything  anyone learned. 

[Image: misleading brain analogy]

We have this vacuous attempt to explain memory, not corresponding to any physical reality in the brain: "This plasticity, the quality of being easily reshaped, makes the brain really good at learning — a quintessential process that allows us to remember the plotline of a novel, navigate a new city, pick up a new language, and avoid touching a hot stove." It is not correct that brains are "easily reshaped," and it is not correct to suggest that brain structure changes after learning. Scan a brain before and after 8 hours of school learning, and you will see no difference. 

The writer then tells us a myth with no basis in fact, stating this:

"Recently, neuroscientists described a new form of neuroplasticity that might be helping the brain learn across a timescale of several seconds — long enough to capture the behavioral process of learning from a single experience. In two recent reviews, published in The Journal of Neuroscience and Nature Neuroscience, they describe 'behavioral timescale synaptic plasticity,' or BTSP. This type of learning in the hippocampus, the brain’s memory hub, is caused by an electrical change that affects multiple neurons at once and unfolds across several seconds."

The claim that something called  "behavioral timescale synaptic plasticity," or BTSP is a "type of learning" is a claim without any basis in fact. Before looking at one of the reviews cited in the quote above (the one that is not behind a paywall), I must give some prefatory description of the social construction of discovery legends in today's neuroscience. 

Research in cognitive neuroscience is dominated by low-quality studies. The study here concludes, "Our results indicate that the median statistical power in neuroscience is 21%." This is an abysmal figure. It has long been said that in experimental research the goal should be a statistical power of 80%, meaning roughly an 80% chance of detecting a true effect of the assumed size. A study with a statistical power of 21% is a low-quality study that is likely to be announcing a false alarm. When a research field has a median statistical power of 21%, that means half of its studies have a statistical power of 21% or less. If such an estimate is correct, it means the great majority of neuroscience studies report results that are unreliable or untrue.
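To see how damning a median power of 21% is, consider a standard normal-approximation power calculation for a two-sample comparison. This is a generic illustration, not a reanalysis of any cited study; the effect size and group sizes below are hypothetical:

```python
import math
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    effect_size is Cohen's d (mean difference / pooled standard deviation).
    Uses the normal approximation: power = Phi(d * sqrt(n/2) - z_crit).
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    return nd.cdf(effect_size * math.sqrt(n_per_group / 2.0) - z_crit)

# A hypothetical medium effect (d = 0.5) studied with 10 animals per
# group yields power of about 0.20 -- close to the reported 21% median.
print(round(two_sample_power(0.5, 10), 2))   # 0.2
# Reaching the conventional 80% target would take about 63 per group.
print(round(two_sample_power(0.5, 63), 2))   # 0.8
```

In other words, a study run at the field's median power is more likely to miss a real medium-sized effect than to find it.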

A neuroscientist wishing to gain fame and funding may do some low-quality research, and claim his research is the first discovery of some new effect. The neuroscientist may coin some name for this alleged effect, perhaps using an acronym. Whether that name is forgotten may depend on whether other neuroscientists are willing to repeat the observational claim, and whether other scientists are willing to try to replicate the effect. If a claim of a discovery "presses the buttons" of neuroscientists by claiming something they are eager to see, the original observer's discovery claims may be repeated by other scientists. But it will often be the case that there is no sound warrant for either the original observational claim or the similar claims by other scientists. An original low-quality paper reporting some effect may use poor experimental methods, and equally poor experimental methods may be used by others who claim to see the same effect.

Consequently we should never take it for granted that something is true just because some scientific paper says that scientist X claimed to see such a thing, and that scientists Y and Z claimed to see it too. The social construction of groundless triumphal legends is extremely common in neuroscience literature. The standards for getting a neuroscience paper published are low; junk research and low-quality experimental papers are published every week. So you must always go back to the papers being cited, look at them critically, and ask: was there ever any decent evidence observed here?

Let's do that with one of the two papers cited by the Quanta Magazine article above, the one not behind a paywall. The paper is an extremely misleading review article entitled "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory," which you can read here. The paper is short on descriptions of reliable observations of naturally occurring things, and very long on triumphal narrations, most of which are groundless boasts hailing supposedly marvelous accomplishments of poorly designed, low-quality scientific studies. The paper's chief triumphal narration is the claim that something called behavioral timescale synaptic plasticity (BTSP) was discovered in 2017 by Katie C. Bittner and others.

We hear these claims by the "burst in the field" paper about this BTSP:
  • "It is triggered by occasional dendritic plateau potentials associated with a burst of firing in the soma." Neurons fire at rates varying unpredictably between about 1 and 100 times per second. So anyone analyzing noisy, variable data on neuron firings might be able to find "occasional dendritic plateau potentials associated with a burst of firing in the soma," even if there is no such thing as BTSP. Similarly, anyone analyzing cloud formations sufficiently will be able to find occasional cloud clumps with this or that shape. 
  • "BTSP operates on the timescale of seconds rather than milliseconds and can therefore support associative learning over temporal delays relevant to behavior." If this alleged BTSP occurs quickly, that is no reason at all for thinking it has anything to do with any type of learning. 
  • "It leads to large changes in synaptic strength, enabling fast remodeling of neuronal representations that may support one-shot learning." There is no robust evidence for representations in either neurons or synapses. The size and strength of synapses vary randomly over days and weeks, and an increase in the strength of some synapses is never evidence that anything is being represented. And there is no robust evidence that "large changes in synaptic strength" ever naturally occur on a timescale of seconds. 
The paper claims this: "A different kind of plasticity called behavioral timescale synaptic plasticity (BTSP) has recently been uncovered in area CA1 of the hippocampus (Bittner et al., 2017; Milstein et al., 2021) and has properties that appear to solve many of the aforementioned limitations of Hebbian plasticity (Magee and Grienberger, 2020)." Bittner's 2017 paper supposedly first observing this alleged BTSP effect is behind a paywall. But we can look at this 2021 paper by Milstein, co-authored by Bittner. It is a very low-quality paper that you can read here, one with the misleading title "Bidirectional synaptic plasticity rapidly modifies hippocampal representations." There is no robust evidence for the representations claimed.

This 2021 paper co-authored by Milstein and Bittner starts out by reciting many an unfounded legend and dubious dogma of neuroscientists. Then we have in Figure 1 some actual fresh observational data, and it is nothing remotely resembling compelling data. We have data from a single mouse that was on a treadmill. The graphs are not really hard data, because they refer to "place fields," which are social constructs of neuroscientists. Here the paper reports a claimed observational result that no one should take seriously unless it involved at least 15 or 20 animals per study group. But the study group consists of a measly one animal: not even 10% of what it should be for reliable evidence to be claimed.

Figure 2 is just as laughable as evidence. Since it has a caption of "mouse running," it seemingly also is data from a single mouse. Figure 3 fails to mention any study group size larger than 1. We have a graph mentioning "synaptic weights," one seeming to show some kind of increase; but the graph has no scale. No actual measurement of synaptic weights is occurring. Why of course -- synapses are things too tiny to be weighed with any accuracy.

Later in the paper we have an indication that no reliable measurement of synaptic weights was occurring. We read, "We modeled changes in synaptic weights as a function of the time-varying amplitudes of these two biochemical intermediate signals, ET and IS." So apparently something else easier to measure was being measured, and the authors were engaging in the very dubious business of claiming that this other thing was some indication of synaptic weights.  That sounds rather like someone trying to deduce the weight of someone's meals by how much they spent on groceries this week -- not a reliable way of doing things. 

Nothing reliable is being done here to show that this claimed "Behavioral Timescale Synaptic Plasticity" naturally exists, or that it can produce rapid changes in synaptic strength. And even if you were to show such a thing, that would do nothing to explain instant learning, since changes in synaptic strength are not credible explanations of how newly learned information could be stored. For further evidence of the low quality of Milstein and Bittner's 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," we need merely search for whether a sample size calculation was done. The paper confesses, "Sample sizes were not determined by statistical methods." Why of course. Since laughable, ridiculous sample sizes such as only one mouse were used (rather than decent study group sizes such as 15 or 20 mice per study group), the authors did not do a sample size calculation, which would have revealed a statistical power far below 25%.
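For reference, the sample size calculation such papers skip takes only a few lines under the same kind of normal approximation for a two-sample comparison. The effect sizes below are hypothetical placeholders, since the papers report none:

```python
import math
from statistics import NormalDist

def required_n_per_group(effect_size, power=0.80, alpha=0.05):
    """Smallest per-group n for a two-sided, two-sample z-test
    (normal approximation), solving power = Phi(d*sqrt(n/2) - z_alpha)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)
    z_power = nd.inv_cdf(power)
    return math.ceil(2.0 * ((z_alpha + z_power) / effect_size) ** 2)

# Even a hypothetical large effect (Cohen's d = 1.0) needs about 16
# animals per group for 80% power; a medium effect (d = 0.5) needs 63.
# A study group of a single mouse cannot support any such inference.
print(required_n_per_group(1.0))  # 16
print(required_n_per_group(0.5))  # 63
```

Note that even under the most generous assumption of a large effect, the answer lands in the 15-to-20-animals-per-group range.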

[Image: questionable research practices in cognitive neuroscience]

By citing Milstein and Bittner's very low-quality 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has given us another example of what constantly goes on in the dysfunctional world of neuroscience research: paper authors citing very low-quality research as evidence of some effect they are arguing for, seemingly without applying any critical scrutiny before citing a paper.


The Quanta Magazine article and some of the papers mentioned above refer to a 2015 paper co-authored by Bittner and Jeffrey C. Magee, entitled "Conjunctive input processing drives feature selectivity in hippocampal CA1 neurons." While the study makes reference to a "normative" pool of 21 mice, the study's claims of detecting something are based on a way-too-small study group of only 6 mice. It's another very low-quality paper failing to provide any decent evidence of "Behavioral Timescale Synaptic Plasticity." The authors confess that "no statistical methods were used to predetermine sample sizes," which is always a damning confession in a scientific paper of this type, kind of a "we were too lazy to act like good scientists" confession. But (paying no attention to quality factors) the Quanta Magazine article senselessly treats the study as if it was something important, and has a big photo of Magee. This is typical for Quanta Magazine, which seems never to pay any attention to whether neuroscience studies meet the hallmarks of robust, well-designed science.

The paper "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has graphs from the 2025 paper "Synaptic plasticity rules driving representational shifting in the hippocampus," which you can read here. That paper mainly refers vaguely to "mice" without mentioning exact study group sizes. But occasionally the paper does mention them, and they are way-too-small sizes such as only 4 mice, only 7 mice and only 9 mice. We read, "CA1 recordings were done in 4 mice, CA3 recordings in 7 mice and optogenetic experiments in 9 mice." These study group sizes are far too small for the paper to be taken as serious evidence of anything. No paper like this should be taken seriously unless 15 or 20 animals per study group were used. We have the damning confession in the paper that "No statistical method was used to predetermine sample size." If the paper authors had acted like good scientists by doing such a calculation, they would have found out how inadequate their study group sizes were. We also read, "Investigators were not blinded to CA1 or CA3 groups." This is a crucial defect for a paper like this. We have here a very low-quality example of a Questionable Research Practices study, one that fails to provide any good evidence for "Behavioral Timescale Synaptic Plasticity."

So it seems the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" (which you can read here) fails to cite any convincing studies showing any such phenomenon as "Behavioral Timescale Synaptic Plasticity." I reach this conclusion not based on a reading of all the papers it cites, but by looking at the studies discussed above, which were all very low-quality papers badly guilty of Questionable Research Practices. The review provides no robust evidence that scientists have demonstrated any natural ability by which synapses could be instantly or very quickly strengthened.

The review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" is a mere review article, not a systematic review. In scientific literature, a systematic review follows a clear methodology, stated in the paper, about which papers are to be cited as evidence: a clearly defined quality filter. A mere review article cites whatever papers the authors wish to cite, without subjecting them to any stated quality filter. In today's neuroscience literature there is a plethora of misleading review articles citing poor-quality papers.

[Image: review article versus systematic review]

We get an indication in the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" that the results it reports are typically not natural occurrences, but instead artificial occurrences produced by scientists doing special fiddling. In the article we read this: "The main approach, introduced by the Magee lab (Bittner et al., 2017), is to artificially induce BTSP with a long-lasting high–amplitude depolarization of the soma (typically a 300 ms 600 pA current injection) triggering a somatic CS (which is assumed to reflect a dendritic plateau), preceded or followed by synaptic activity within a few seconds time window (Fig. 3B)."

So mainly what is being reported are artificial results produced by current injections -- experimenters zapping brains with electrical currents. The paper then says that this can be done either "in vitro" (that is, using tissue detached from an organism's body) or "in vivo" (observing something inside a living organism). But we are told that the "in vivo" observations require artificial experimenter manipulations such as "current injections" or "optogenetic stimulations." Both are artificial types of brain zapping. Observations requiring such energy injections by experimenters are not evidence of something naturally occurring in the brain.

[Image: bungling neuroscientist]

I propose the term "electromisrepresentation" to describe misleading narratives of this type. We can define electromisrepresentation as the artificial production of brain effects by methods such as electrical stimulation, combined with a misleading narrative trying to suggest that the resulting effects can explain natural human capabilities. Electromisrepresentation has massively occurred in discussion of so-called long-term potentiation or LTP. 

The Quanta Magazine article based on this scientific paper is also very misleading bunk, an article that attempts to persuade us of the existence of something for which there is no robust experimental evidence. A very bad example of groundless narration, the article is full of untrue statements claiming magnificent accomplishments from scientists who actually ran very low-quality studies deserving mainly scorn, because of multiple methodological sins such as the use of way-too-small study group sizes, the lack of a blinding protocol, the lack of pre-registration, and an abundance of unreliable claims about the physical state of things (synapses) too small to have their physical state reliably measured. Occasionally the Quanta Magazine article hints at what baloney it is shoveling, such as when it says, "There’s still much unknown about BTSP, especially the mechanism, which Madar said is 'quite speculative.' ”

But that's par for the course in the untrustworthy world of today's neuroscience research, where neuroscientists these days boast like crazy about doing all kinds of wonderful things that were not actually done, because decent scientific procedures were never followed, and the experiments were so poorly designed and guilty of so many defects. 

There are two very strong reasons for rejecting all claims that there is good evidence that there are ever any quick natural increases in synaptic strength:
(1) The intrinsic unreliability of all attempts to measure the strength of synapses, given the incredibly small size of synapses, which makes all attempts to measure their strength dubious and unreliable. The largest parts of synapses (their clefts) are about 500 to 1000 times smaller than the largest part of a neuron (its soma or main body). 
(2) The intrinsic implausibility of any claims that synapses could naturally be quickly strengthened, given the fact that any synapse strengthening would require new protein synthesis, a process that takes minutes or hours. 

Scientists have never produced any credible tale to explain how instant learning, or learning of any type, could occur in a brain, which has no components bearing any resemblance to a system for storing learned information. You would never show storage of learned data or experiences by merely showing an increase in the strength of something. No well-designed and robust scientific studies have ever produced any compelling evidence that learned information has been physically stored in brains. Claims about LTP arose from the type of artificial brain tissue zapping described above, with researchers ignoring that what was occurring did not correspond to natural events in the brain. Microscopic examination of brain tissue has never yielded the slightest trace of anything anyone learned or experienced -- not a single sentence, not a single word, not even a single character or number, nor even a single pixel of anything anyone saw.

Scientists have reliably determined that synapses are built of proteins with average lifespans of only a few weeks, roughly a thousandth (.001) of the maximum amount of time that humans can remember things, which is about sixty years. Besides utterly failing to explain how a brain could do memory storage, and how memories could persist for decades, scientists have utterly failed to explain how a brain could do instant memory retrieval, or memory retrieval of any type. We know the types of things that allow for instant retrieval of stored information: things such as addresses, indexing and sorting. No such things exist in the brain. The world of neuroscientist claims about memory is a world of fantasy and pareidolia, in which neuroscientists eagerly hoping to see things claim to see the faintest evidence of such things, like some wanderer in the desert eagerly scanning the far horizon and claiming to see water there, when only a mirage is present, or only "see what you yearn to see" pareidolia is occurring. 
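The "roughly a thousandth" ratio in the paragraph above is simple arithmetic to verify. The three-week figure below is an assumed stand-in for "a few weeks":

```python
# Arithmetic check of the "roughly a thousandth" ratio claimed above.
# The three-week protein lifespan is an assumed value for "a few weeks."
weeks_per_year = 52.18
memory_span_weeks = 60 * weeks_per_year    # ~3131 weeks: sixty years of memory
protein_lifespan_weeks = 3                 # assumed average synaptic protein lifespan

ratio = protein_lifespan_weeks / memory_span_weeks
print(round(ratio, 4))   # about .001, i.e. roughly a thousandth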

For a discussion of the very many ways that scientists have of conjuring up claims of things that don't exist, see my post here entitled "Scientists Have a Hundred Ways To Conjure Up Phantasms That Don't Exist," and my post here entitled "The Social Construction of Eager Community Mirages."

Wednesday, April 29, 2026

Getting Billions, They Boasted They Would Get by 2025 a "Comprehensive Mechanistic Understanding of Mental Function"

 In recent years the two largest brain research projects have been a big US project launched in 2013 called the BRAIN Initiative, and a big European Union project launched in 2013 called the Human Brain Project. The BRAIN Initiative has received billions in funding, but has failed to fulfill the boasts it made about what it would do by the year 2025. 

More than a decade ago, the leaders of the BRAIN Initiative produced a document filled with hubris, one boasting about the grand and glorious things the project would achieve by the year 2025. The document was called Brain 2025: A Scientific Vision, and was offered at one of the project's two main web sites. You can read the document at the link here; after going to that page you need to press a + button (next to "Expand accordion content") to get the whole text. 

The “scientific vision” laid out in the document is largely an ideological vision, based on the unbelievable idea that the human mind is merely the product of the brain. The dubious ideology of the authors is made clear in the very first sentence of the document, in which the authors state, “The human brain is the source of our thoughts, emotions, perceptions, actions, and memories; it confers on us the abilities that make us human, while simultaneously making each of us unique.” It has certainly not been proven that any brain has ever generated a thought or stored a memory.

In fact, later in the document the authors confess, “We do not yet have a systematic theory of how information is encoded in the chemical and electrical activity of neurons, how it is fused to determine behavior on short time scales, and how it is used to adapt, refine, and learn behaviors on longer time scales.” This is certainly true. No one has anything like a systematic theory of how a brain could store memories as neural states, nor has anyone come up with anything like a systematic theory of how a brain could generate a thought. So why, then, did the document start out by stating that “the human brain is the source of our thoughts, emotions, perceptions, actions, and memories”? No one has any business making such a claim unless he first has “a systematic theory of how information is encoded in the chemical and electrical activity of neurons.” But the document admits that no such theory exists.

But despite this one candid confession, the document was a writing of enormous hubris. The authors boasted that by the year 2025, the BRAIN Initiative would figure out how minds work. The document stated, "The most important outcome of the BRAIN Initiative will be a comprehensive, mechanistic understanding of mental function that emerges from synergistic application of the new technologies and conceptual structures developed under the BRAIN Initiative.” Notice the enormous predictive conceit of that statement, which sounds like a delusion of grandeur. The authors did not merely claim that they would "shed light" on how minds work, or that they would "get clues" as to how minds work. They boasted that their project would produce a "comprehensive, mechanistic understanding of mental function." Making a boast as big as the sky, the authors predicted that their project would tell us how brains produce minds and their phenomena. 

What has been the result of the BRAIN Initiative? No great breakthroughs have occurred. The results (to use English slang) are "peanuts" or "chickenfeed." 


On the page here, we get a summary of the BRAIN Initiative's achievements in 2024. None of it sounds like an achievement relevant to whether brains make minds, except for the claim that a "brain-computer interface that can convert brain waves into speech with minimal training" was developed. We have a link to the page here, which makes the same claim. The claim is unfounded. The pages are referring to the study "Representation of internal speech by single neurons in human supramarginal gyrus." My post here explains why the study is not actually a demonstration of a "brain-computer interface that can convert brain waves into speech." 

What's going on in the study is a reading of brain waves during very rapid switching between "speak it" instructions and "think it" instructions, with no care taken to prevent subjects from speaking during the very short "think it" periods lasting only a few seconds. We should assume that many of the claimed "internal speech" intervals were actually audible-speech events, because subjects failed to follow a very hard-to-perform protocol, one seemingly designed to produce such "failure to follow instructions" events. Under that assumption, the results can easily be explained without assuming that any "converting of brain waves into speech" occurred. The second of the BRAIN Initiative pages given above boasts that "For one of the two participants, the BCI [brain-computer interface] could decode several words of their inner dialogue with 79% accuracy during an online task." These are meager tiny-sample-size results easily explained by chance, or by differences in muscle movements producing different types of brain waves, with supposed "inner dialogue" events often being verbal speaking events, as users failed to follow perfectly the hard-to-follow instructions requiring rapid switching between speech and pure thought. 

On the same BRAIN Initiative page we have another similar boast of a "brain-computer interface." It is a reference to a paper that makes no claim of picking up an "inner dialogue" involving no muscle movements. Instead, a patient with a speech problem had electrodes implanted in his brain, and a system picks up his attempts to speak different words. Such an attempt can produce limited success mainly because different types of speech efforts (involving slightly different muscle movements) may produce different types of EEG waves. Muscle movements or attempted muscle movements show up very noticeably in EEG brain wave readings; and distinctive types of muscle movements corresponding to particular speech sounds (phonemes) may produce distinctive blips in EEG readings. 

Studies like this (hailed as examples of "mind reading" from brains) typically involve a variety of shady tricks, such as using inputs beyond mere brain waves, for example inputs from eye tracking devices, which make it easy to determine what word or picture on a screen someone is focusing on. 

The reality is that the BRAIN Initiative has failed to produce any results that back up in any weighty way the claims that brains make minds and that brains store memories. You cannot actually detect what someone is thinking by analyzing mere brain waves. Studies claiming to do such a thing typically involve various types of dubious methodology and objectionable techniques. A well-designed, fairly conducted and well-analyzed study will always show a failure to detect from brain waves alone what someone is thinking.

These were additional unfulfilled boasts of the document entitled Brain 2025: A Scientific Vision:
  • "We expect to discover new forms of neural coding as exciting as the discovery of place cells, and new forms of neural dynamics that underlie neural computations." So-called "place cells" were never actually discovered. The claim that they were discovered is one of the many groundless triumphal legends of neuroscientists, who have a high tendency to repeat the "old wives' tales" of the belief community they belong to. Read my post here for a debunking of claims that "place cells" were ever observed. All that happened was that scientists observed some cells and claimed that those cells were more active when some rats were in certain places. The studies were never examples of robust science, because they were guilty of various methodological sins such as using way-too-small study group sizes. No actual new form of neural coding was ever discovered by the BRAIN Initiative or any other scientific project or study. And no one ever discovered "neural dynamics that underlie neural computations."
  • "Through deepened knowledge of how our brains actually work, we will understand ourselves differently, treat disease more incisively, educate our children more effectively, practice law and governance with greater insight, and develop more understanding of others whose brains have been molded in different circumstances." No such bonanza of benefits resulted from the BRAIN Initiative. Neuroscience has done nothing to improve the education of children, and done nothing to improve the practice of law or government.
  • "We must understand how circuits give rise to dynamic and integrated patterns of brain activity, and how these activity patterns contribute to normal and abnormal brain functions. Our expectation is that this approach will answer the deepest, most difficult questions about how the brain works, providing the basis for optimizing the health of the normal brain and inspiring novel therapies for brain disorders." The BRAIN Initiative wasted billions floundering around in this dead end, but did not answer any of the "most difficult questions about how the brain works," or any of the "most difficult questions about how the mind works."  Scientists still have no credible tale to tell of how a brain could think, imagine, instantly learn or instantly recall. 
  • "We expect The BRAIN Initiative® to develop new biological reagents, possibly including genetically-modified strains of rodents, fish, invertebrates, and non-human primates; recombinant viruses targeted to different brain cell types in different species; genetically-encoded indicators of different forms of neural activity; and genetic tools for modulating neuronal activity." Here the scientists (sounding like eugenics enthusiasts) fall into Frankenstein folly by boasting about how they will monkey with the genes of various organisms, including rodents and primates. The hubris involved here should provoke the gravest concerns. And anyone familiar with the very substantive suspicions that the COVID-19 virus might have arisen from a lab leak should shudder at the proposed gene fiddling. 

Saturday, April 25, 2026

She Had Above-Average Intelligence With Only About 15% of Her Brain

The failure of neuroscientists to adequately study minds is a very severe failure. You can get a PhD in neuroscience while making only a perfunctory study of human minds. An examination of the courses required to get a Master's Degree in neuroscience will typically show that only one or two courses in psychology are required. Doing a neuroscience PhD dissertation typically involves some highly specialized research on some very narrow topic, research that does not require much additional study of human minds and the mental capabilities and mental experiences of humans. The topic of human minds and human mental experiences is a topic of oceanic depth, requiring years of deep study for someone to get a good grasp of the full range of human mental states, capabilities and experiences. Very strangely, a typical neuroscientist will feel qualified to pontificate about what causes mental experiences, mental states and mental capabilities, even though he has typically done little deep study of those very things.

Ask a neuroscientist to describe the best examples of high capacity and high accuracy in human memory recall, and you will likely get a shrug of the shoulders, or an answer that is wrong. Ask a neuroscientist to describe the best examples of human performance in tests of extrasensory perception (ESP), and you will likely get a shrug of the shoulders, or an answer that is wrong. Ask a neuroscientist to describe the best examples of humans learning or memorizing things very quickly, and you will likely get an answer showing no study of such a topic. Ask a neuroscientist to describe the fastest examples of human calculation involving no use of any objects such as pencil, paper or blackboards, and you will likely get an answer that fails to describe the most impressive cases. 

Rather amazingly, it also seems true that most or very many neuroscientists are not very deep and thorough scholars of the topic of human brains. A typical neuroscientist may be able to tell you in very great detail about some particular aspects of human brains, and may be able to tell you in the greatest detail about how to use some machine that is used to study brains. But the same neuroscientist may have failed to study human brains thoroughly, in a way that involves learning every relevant thing that can be learned about them. Ask that neuroscientist to tell you what happens when you remove half of a human brain, and you may get an answer that is wrong. Ask that neuroscientist to tell you how reliably chemical synapses transmit nerve signals (action potentials), and you may get an answer that is wrong. Ask that neuroscientist to tell you how quickly a brain electrically shuts down when the heart stops (reaching a state called asystole), and you may get an answer that is wrong. Ask that neuroscientist how quickly the average brain signal travels, and you will typically get an answer that is wrong, an answer failing to take into account all of the relevant factors, such as the very strong slowing factor of cumulative synaptic delays and the relatively slow transmission speed of dendrites.

Part of the job of properly studying brains is to study very thoroughly all of the most impressive cases of high mental performance despite very high brain damage. Relatively few neuroscientists show signs of having studied such a topic. In order to properly study such a topic, you must study unusual medical case histories.  Very many of the most important and relevant medical case histories are recorded in books, newspapers and magazines. But can you ever recall reading of a neuroscientist searching newspapers for unusual case histories in neuroscience? 

Luckily there are some web sites that contain very many of the medical case histories most relevant to the question of whether the human brain is the source of the mind and whether the human brain is the storage place of human memories. One of those sites is the very site you are reading. In my series of posts labeled "High Mental Function Despite Large Brain Damage," which you can read here, I describe many of the most important case histories relevant to the question of whether the human brain is the source of the mind. Now let me provide some more such cases. 

The first case involves hydrocephalus, a disease in which a brain has excessive watery fluid. In cases of hydrocephalus a brain may end up in a state that is mostly watery fluid. The brain scan of someone with severe hydrocephalus might look something like the schematic visual below. The black part in the middle is a watery fluid that has basically no neurons. 


The case of Sharon Parker is described in a 2003 news story entitled "Success of Nurse Who Lost Most of Her Brain." You can read the story here. We read this:

"When she was a baby, Sharon Parker's parents were told a rare and incurable condition meant she would not reach her fifth birthday.

She was left with only 15 per cent of her brain and there was little hope she could lead a normal life. But she defied the experts to become an astonishing success story.

Now 39, Mrs Parker is a nurse with a high IQ who is happily married with three children....She was diagnosed with congenital hydrocephalus - water on the brain - when she was nine months old. Doctors drained the liquid from her skull with a tube but her brain mass had been compacted in the outer edges of her skull, leaving a gaping hole in the middle....As a 16-year-old, she passed eight O-Levels and her IQ was later found to be 113, putting her in the top 20 per cent of the population.

The hydrocephalus has left her with a below-average short-term memory so she carries a notebook to remind herself to do things. However, tests have found that her long-term memory is better than average.

After leaving school, Mrs Parker decided to become a nurse and soon after starting her training, she met her future husband David, a builder who is now 45. The couple were married three years later and have three children...

She often participates in studies, including one recently in Ohio when she was examined by one of the world's leading experts on brain mass. Graham Teesdale, Professor of Neurosurgery at the University of Glasgow, said she demonstrated how adaptable the brain can be even when it is incomplete. 'She shows how the brain has an immense capacity to cope and adapt,' he said. 'Some people with the same acute problem experience problems in thought processes but others are able to function totally normally.' "

A materialist who believes that the brain is the source of the mind may wince after reading about this case history. But there is another hydrocephalus case that may be even better as evidence that brains do not make minds. Coincidentally, this case also involves someone named Parker, but someone other than Sharon Parker: a male with almost no brain. 

We read about the case on the page here: 

"Parker was born on September 9, 2008, with hydrocephalus, or excess fluid on the brain. Parker’s parents received the diagnosis at 20 weeks in the pregnancy that there was a blockage between the third and fourth ventricles of Parker’s brain, which was preventing the cerebrospinal fluid from draining into the body. As a result, the fluid would build up and compress Parker’s brain matter against his skull, making it almost non-existent, threatening to severely hinder Parker’s early neurological development. At birth, the average baby has 90–95% brain matter and 5–10% fluid within the cranial cavity; Parker had over 98% fluid and less than 2% brain matter, amounting to a mere 8 millimeters of brain matter at birth."

Later on the same page we read this: 

"He attends a special-needs kindergarten class, where he continues to thrive and demonstrate an inexplicable intellect and remarkable social skills.

Parker is truly a miracle – the child who once was thought may never walk or talk now plays, dances, sings, never stops talking (having never met a stranger) and hopes to one day become a sportscaster. Parker has far exceeded every expectation of his doctors and also adds being named the 2015 Ace All-Star to his long list of remarkable achievements."