Wednesday, May 13, 2026

They Keep Giving Awards for Weak Neuroscience Research

 It is a pitiful thing when an award goes to research that is poorly designed and follows poor research practices. Whenever such an award is given, it sends a message to today's neuroscientists, a message rather like this:

You can follow Questionable Research Practices, and get away with it. Not only can you get away with it, you may even get a prestigious award for doing some poorly designed piece of schlock work that no scientific journal with high standards should have even published. 

In my posts here, here and here, I gave examples of inappropriate neuroscience awards that were given for low-quality neuroscience research. Now we have another example, from the site StatNews.com.

Statnews.com tries to project the aura of a serious, respectable science news site. It bills itself as "your go-to source for the world of life sciences, medicine, and biopharma." My guess is that the site gets funding from pharmaceutical companies and medical device manufacturers, and that it exists largely to serve their interests. On the site's pages we don't get the usual swarm of ads that you see these days on so many so-called science news sites; but there are some ads. You should always be suspicious of any science news site containing ads. Every time you see an ad on the pages of Statnews.com, remember that ad-supported science news sites tend to run clickbait headlines designed to drive clicks, so that readers land on pages with ads that make the site owners money.

The statnews.com site has some prize it calls the STAT Madness Editors' Pick. This year the award went to Maiken Nedergaard for research relating to brains flushing out waste, research found in the paper here. We read about the awarding of the prize in the article here, which you can read only part of without signing up for something. But I can read the full article on my iPad without doing the sign-up. There I learned the prize was awarded for the paper "Norepinephrine-mediated slow vasomotion drives glymphatic clearance during sleep," which you can read here. It is a low-quality paper because of its way-too-small study group sizes.

It is usually easy to find what study group sizes were used in a research paper. You just search the text of the paper for the phrase "n=" or "n =", which gives study group sizes. Sometimes such a search will fail to tell you the study group size, and you must take additional steps such as these:

(1)  Go to each of the figures in the study, and click on each of the "Expand caption" links, to get the full text of the captions. Then look in the text of such captions, for phrases such as "n=" or "n =" in the text. 

(2) If this still fails to give you the study group size, you may need to take the additional step of searching for a phrase such as "mice" or "rats" or "humans" or "subjects" in the text of the paper. 
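The kind of text search described in these steps can be automated with a short script. Here is a minimal sketch in Python; the regex pattern and the sample caption text are illustrative, not taken from any particular paper or tool:

```python
import re

def find_group_sizes(text):
    """Return all study group sizes written as 'n=3' or 'n = 3' in a block of text."""
    # \b avoids matching the 'n' inside other words; \s* tolerates optional spaces
    return [int(m) for m in re.findall(r'\bn\s*=\s*(\d+)', text, flags=re.IGNORECASE)]

# Hypothetical caption text, purely for illustration
caption = "Mean clearance rates (n=3 mice); vasomotion traces (n = 7 mice)."
print(find_group_sizes(caption))  # → [3, 7]
```

Running such a search over every expanded figure caption quickly surfaces all the reported group sizes, which is exactly the manual procedure described above.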

Doing that in this paper (which requires the first step above, because details are hidden in captions that must be clicked to see in full), we find that the study group sizes were very low. The full caption of Figure 1 tells us that only 3 mice, 5 mice and 6 mice were used to get some of the results shown in that figure. The full caption of Figure S1 tells us that only 5 mice and 3 mice were used to get some of the results shown in that figure. The full caption of Figure 2 tells us that only 3 mice or 7 mice were used to get some of the results shown in that figure. The full caption of Figure S2 tells us that only 3 mice were used to get some of the results shown in that figure. Clicking on Figure 3 tells us of study group sizes of only 5-8 mice. Similarly way-too-small study group sizes are mentioned in the captions for the other figures. For example, the caption of Figure 4 mentions study group sizes of only 4 mice for some of the experiments.

study sizes in rodent research

Did the authors do a sample size calculation to determine whether they used adequate study group sizes? They make no mention of doing such a thing. There is pretty much only one reason why the authors of such a study would fail to do a sample size calculation: they knew or suspected that the sample sizes they were using were way too small. The authors do not mention observing any large effect size, so we may presume any observed effects were no larger than medium. Effect sizes in neuroscience research are almost always small, so the number of required mice would have been at least 15 to 20, much more than the very small number of mice that were used.
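To see what a standard sample size calculation looks like, here is a sketch using the common normal-approximation rule for comparing two groups at 5% significance and 80% power. The effect sizes shown are Cohen's conventional benchmarks, not figures from the paper:

```python
import math

def required_n_per_group(effect_size_d, z_alpha=1.96, z_power=0.8416):
    """Normal-approximation sample size for a two-group comparison:
    n per group = 2 * ((z_alpha + z_power) / d)^2, rounded up.
    z_alpha is the 97.5th percentile of the standard normal (two-sided alpha = 0.05);
    z_power is the 80th percentile (power = 0.80)."""
    return math.ceil(2 * ((z_alpha + z_power) / effect_size_d) ** 2)

# Cohen's conventions: large d = 0.8, medium d = 0.5, small d = 0.2
for d in (0.8, 0.5, 0.2):
    print(d, required_n_per_group(d))
```

Under this rule of thumb, even a large effect (d = 0.8) calls for about 25 animals per group, a medium effect for about 63, and a small effect for nearly 400: all far beyond the 3 to 8 mice reported in the paper's figure captions.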

We have no mention of any blinding protocol, which means that any difference between the control group and the experimental group may be merely due to bias of those analyzing the groups, bias from observers who knew which was the experimental group and which was the control group. 

The paper is largely devoted to trying to prove some utility for a drug called zolpidem (also called Ambien). When papers serve to promote a particular drug, the research is typically paid for by the manufacturer of the drug, to help promote sales of the drug. Was that going on here? You cannot tell. There is a mention of a bunch of grants that helped to fund the research, but it is a funding trail too complex to unravel. We do hear that one of the authors is a paid consultant for "CNS2," which is not identified.

It seems what we have here is some low-quality experimental work failing to use study group sizes even half as large as should have been used. So why did Statnews.com award this research an annual prize? What does it say about the state of neuroscience research these days, when experimental studies this weak get prizes? 

Saturday, May 9, 2026

Philosophers Appealing to Zombies Are Firing Blanks

Thinkers in the philosophy of mind have wasted too much ink on the topic of philosophical zombies, which has been the subject of many an article and essay. The first zombies argument against materialism (using that term, zombies) occurred in the paper "Zombies v. Materialism" by Robert Kirk and J. E. R. Squires. Kirk made an argument that was not very clearly stated. Later, in his work The Conscious Mind, David Chalmers stated the argument more clearly.

On page 94 of that work we have a section entitled "Argument 1: The Logical Possibility of Zombies." Here is some of the reasoning by Chalmers:

"The most obvious way (although not the only way) to investigate the logical supervenience of consciousness is to consider the logical possibility of a zombie: someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether. At the global level, we can consider the logical possibility of a zombie world: a world physically identical to ours, but in which there are no conscious experiences at all. In such a world, everybody is a zombie. So let us consider my zombie twin. This creature is molecule-for-molecule identical to me, and indeed identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely....What is going on in my zombie twin? He is physically identical to me, and we may as well suppose that he is embedded in an identical environment. He will certainly be identical to me functionally: he will be processing the same sort of information, reacting in a similar way to inputs, with his internal configurations being modified appropriately and with indistinguishable behavior resulting. He will be psychologically identical to me, in the sense developed in Chapter 1. 
He will be perceiving the trees outside, in the functional sense, and tasting the chocolate, in the psychological sense. All of this follows logically from the fact that he is physically identical to me, by virtue of the functional analyses of psychological notions."

What Chalmers has said here should be enough for us to lose any confidence in what he is saying. He is imagining some zombie twin of himself that is unconscious (he says "lacking conscious experiences altogether") but who he claims is "psychologically identical to me." That is nonsense. Psychology is the study of the human mind. If you had a zombie twin that was unconscious, such a being would not at all be "psychologically identical" to you.

It is clear from the above description that the "philosophical zombie" imagined by Chalmers is not simply some unmoving body with the same structure as his body, but also someone acting as he acts "reacting in a similar way to inputs." But an unconscious being would never react in a similar way to inputs as a conscious human does. So there could never be such a "philosophical zombie" that is unconscious but acting like a human and having a human body. 

Chalmers then gives us some Darwin-style reasoning using the "I see no difficulties here" kind of language that Charles Darwin loved to use when suggesting the most fantastically improbable claims. Chalmers says this:

"Arguing for a logical possibility is not entirely straightforward. How, for example, would one argue that a mile-high unicycle is logically possible? It just seems obvious. Although no such thing exists in the real world, the description certainly appears to be coherent. If someone objects that it is not logically possible—it merely seems that way— there is little we can say, except to repeat the description and assert its obvious coherence. It seems quite clear that there is no hidden contradiction lurking in the description. I confess that the logical possibility of zombies seems equally obvious to me. A zombie is just something physically identical to me, but which has no conscious experience—all is dark inside. While this is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description. In some ways an assertion of this logical possibility comes down to a brute intuition, but no more so than with the unicycle. Almost everybody, it seems to me, is capable of conceiving of this possibility."

This is extremely bad reasoning. There is nothing intrinsically illogical or incoherent about the idea of a mile-high unicycle. It is simply an impractical thing that no one has ever built. But the philosophical zombie that Chalmers is suggesting is an impossible, incoherent idea. People act the way they do largely because they are conscious human beings who have the kind of mental experiences that humans have. If you remove human consciousness, no one would ever act in most of the more interesting ways that a human being acts. It is impossible that there could ever exist a physical being having exactly the same physical characteristics and behavior that a human has, but no consciousness. The idea is as incoherent and impossible as a triangle that is missing one of its three sides. Take out one of the three sides of a triangle, and it is not a triangle, but merely an angle. Take away the consciousness of a human, and he would never act in any of the more complex ways that humans act.

In the paragraph above, when Chalmers says "a zombie is just something physically identical to me," he is contradicting his earlier sketch of his philosophical zombie, in which he described such a zombie as not just someone physically identical to him, but also someone behaving the same as he behaves, "with indistinguishable behavior resulting." So first Chalmers describes his hypothetical zombie as someone physically identical to himself and also behaving the same; and then later he tries to make that idea sound not too unbelievable by saying "a zombie is just something physically identical to me," mentioning only half of what he had previously mentioned.

There certainly is a gigantic contradiction in the philosophical zombie that Chalmers describes. I will give a simple example illustrating why. Imagine that a normal human walking down the street sees a lion walking towards him, a lion that escaped from the zoo. The normal human will have a conscious experience of recognition that causes him to sense the danger; and as a result he will flee or hide. But let us imagine a philosophical zombie encountering such a lion on the path ahead of him. Not being conscious, such a philosophical zombie could have no recognition at all. Nor could such a philosophical zombie ever have a feeling such as fear, for you cannot fear something when you are unconscious. And the philosophical zombie could not make a decision to hide or flee, because you can't decide something when you are unconscious. So it is obviously nonsense to imagine such a philosophical zombie behaving as a human would behave in such a situation.

I could provide endless similar examples. How could Chalmers have gone so wrong? Perhaps he made the mistake of thinking that consciousness is a kind of luxury that can occur in addition to things such as recognition, recollection, decision-making, speech, reading, writing, and so forth. Instead consciousness is a prerequisite for such things.  You cannot have in human beings things such as recognition, recollection, decision-making and many other facets of human minds without the prerequisite of consciousness. So it's nonsensical to think that you could get rid of consciousness and still have all those other things, having a philosophical zombie that behaved like a human. 

None of these things can occur in an unconscious human being:

  • imagination
  • abstract idea creation
  • appreciation
  • memory formation
  • moral thinking and moral behavior
  • instantaneous memory recall
  • instantaneous creation of permanent new memories
  • emotions
  • desire
  • speaking in a language
  • understanding spoken language
  • creativity
  • insight
  • beliefs
  • pleasure
  • pain
  • reading 
  • writing 
  • visual perception
  • recognition
  • planning 
  • auditory perception
  • attention
  • fascination and interest
  • the correct recall of large bodies of sequential information (such as when someone playing Hamlet recalls all his lines correctly)
  • spirituality
  • philosophical reasoning
  • volition

Since all of these things would be excluded from an unconscious human, it is nonsensical to say that an unconscious human could behave just like a conscious human. 

Now, it is quite possible that you might build a robot that could behave very much like a human. Such a robot would involve electronic functionality and transistor functionality and computer software functionality unlike anything in the human body. But that is not what Chalmers has imagined. He has claimed that there could exist something that could be physically identical to a human and behave just like a human without being conscious. There could not be any such thing. 

"Supervenience" is a piece of philosophy jargon: A is said to supervene on B when there can be no difference in A without a difference in B. Chalmers' zombie argument is an attempt to show that the mind does not logically supervene on the physical, that mind is something additional to matter, something unexpected from any arrangement of matter. We do not need any appeal to philosophical zombies to establish the idea that the human mind is something unexpected to exist from the matter in a brain. The person wanting to show that the brain fails to explain the mind can discuss the facets of human experience that do not correspond to any reality in the brain. These are very many, including these:

(1) The ability of human minds to instantly form permanent new memories (such as learning of the death of a family member), an ability that does not correspond to any reality in the brain (there being nothing in the brain having any resemblance to a component capable of instantly storing new sensory information, or storing learned information at any rate of storage). 

(2) The ability of human minds to instantly recall many detailed facts about a person after merely hearing their name or seeing an image of that person, an ability that cannot be explained by brains that lack any reading-of-brain-tissue mechanism and lack any of the things that humans put in devices they allow for instant retrieval (there being no sorting, indexing or addressing in the brain). 

(3) The ability of humans to preserve memories for 50 years or more, something unaccountable in a brain subject to very high molecular turnover and high structural turnover, with things such as synapses and dendritic spines not lasting for years, and brain proteins having an average lifetime of only a few weeks.  

(4) The ability of some humans to recall with perfect accuracy extremely long sequences such as all 6000+ verses of the Quran, an ability that cannot be explained by human brains which have no structure capable of explaining the retrieval of very long sequences of learned information. 

(5) The ability of many humans to have out-of-body experiences in which they view their bodies from a position outside of it, during cardiac arrest when the brain is electrically inactive, an ability that cannot be credibly explained under any theory that the brain is the same as the mind or the source of the mind. 

(6) The ability of humans to perform very high-above-chance on tests of telepathy, ESP and clairvoyance, something beyond any neural explanation. 

(7) The ability of some humans (with eyes closed) to perform with blazing speed and perfect accuracy extremely hard mathematical calculations, something that should be impossible in a very noisy brain with so many signal slowing factors, in which chemical synapses (by far the most common type) do not even reliably transmit nerve signals (with the transmission reliability being only 50% or less for each transmission across a synaptic gap). 
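The point in item (7) about unreliable synaptic transmission can be made concrete with a little arithmetic. Assuming, purely for illustration, a 50% transmission probability at each chemical synapse and independent transmission events, the chance of a signal surviving a chain of synapses falls off exponentially:

```python
def chain_reliability(per_synapse_p, num_synapses):
    """Probability that a signal crosses every synapse in a chain,
    assuming independent transmission at each synaptic gap."""
    return per_synapse_p ** num_synapses

# With an illustrative 50% per-synapse transmission probability,
# a signal crossing a 10-synapse chain succeeds only about 1 time in 1000.
for n in (1, 5, 10):
    print(n, chain_reliability(0.5, n))
```

This is a toy model, not a simulation of any real neural pathway, but it illustrates why low per-synapse reliability is hard to reconcile with fast, perfectly accurate calculation.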

The study of these and quite a few other abilities (combined with a close study of the physical shortfalls of the brain) is enough to establish that mind is not equivalent to matter, and that human minds are unexpected from any arrangement of matter in a brain. There is no need to appeal to "philosophical zombies" to try to establish such a thing, and such armchair-reasoning appeals are not examples of sound reasoning.

I may also note that claims about the logical possibility of philosophical zombies could do great moral harm. A man who believes in such a thing may cheerfully rob you and assault you on the street, while thinking to himself that he's not really sure he caused any harm, because maybe his victim was just a philosophical zombie. It is best to avoid such nonsense and think in a sensible, commonsense way, assuming that every human you see walking or talking is conscious.

Tuesday, May 5, 2026

In 2016 Scientists Confessed Science Research Was Broken, and in 2026 Things Are No Better

A 2016 article on Vox.com was a remarkable confession of how bad things are in the world of scientific research. The article was entitled "The 7 biggest problems facing science, according to 270 scientists." We read this:

"We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science....Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public."

The author writes about what is called publication bias, the tendency of science journals to prefer publishing studies that report finding some positive effect, rather than studies that fail to report such an effect, finding what is called a null result.  We read this: 

"Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase 'publish or perish'  hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side."

The statement won't be very clear to the average reader. What the author means by "studies that fail" is not studies that fail to follow good science practice or that fail to be completed, but instead studies that fail to report a positive result. An example would be a study testing for whether removing one particular gene from mice has an effect on their memory, and which reports no effect from such a removal. By "failed studies can mean career death" the author means studies that failed to report a positive effect and that then did not get published, because of publication bias in which positive results are preferred.  A scientist doing enough of such studies that were not published might end up with a low count of published papers. 

We read about conflicts of interest:

"Already, much of nutrition science, for instance, is funded by the food industry — an inherent conflict of interest. And the vast majority of drug clinical trials are funded by drugmakers. Studies have found that private industry–funded research tends to yield conclusions that are more favorable to the sponsors."

Such conflicts of interest taint neuroscience research, because a large fraction of neuroscience research is funded (directly or indirectly) by pharmaceutical companies and biotech device manufacturers hoping to produce some result they can claim is scientific evidence in favor of some pill or device they are selling.  We read in the article that some professors are spending up to 50% of their time writing research grant proposals. 

We get this quote:

"'As it stands, too much of the research funding is going to too few of the researchers,' writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. 'This creates a culture that rewards fast, sexy (and probably wrong) results.' ”

A culture that rewards wrong results? How messed up is that?

We read the following:

"The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more 'revolutionary.'  (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others....Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

'I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,' writes Jess Kautz, a PhD student at the University of Arizona. 'And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work '.”

We read a quote from Joseph Hilgard, who says, "The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true." We read the claim that 85 percent of research is "routinely wasted on poorly designed and redundant studies." We read the claim that up to 30 percent of research turns out to be wrong or to consist of exaggerated results.

We read about how badly published results fail to be replicated. We have a big boldface section header saying this:

"Replicating results is crucial. But scientists rarely do it."

We get this example:

"The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all."

We have this statement about misleading science journalism and misleading university press releases:

"Science journalism is often full of exaggerated, conflicting, or outright misleading claims...Sometimes bad stories are peddled by university press shops....Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice....The 'toxic dynamic' of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step."

The long 2016 Vox article mentions some ways that this sorry state of science research could be improved. But in the ten years since the article was published, there has been no improvement. All the problems discussed in the 2016 article persist, just as bad as they were in 2016. There is no evidence that research scientists and science journalists are improving their dysfunctional and defective methods. And the many severe problems mentioned are only part of the problems that exist. Many other severe problems in science research and science journalism are not mentioned in the Vox article, such as these:

(1) The tendency of scientific researchers to try to do research that supports prevailing dogmas of scientists, which are often groundless dogmas or poorly supported dogmas, rather than to do objective research that takes a "follow the evidence wherever it leads" approach. 

(2) The strong economic motivations that underlie misleading clickbait headlines, motivations such as the desire to produce page views that are profitable because of revenue-generating ads on such pages. 

(3) The use of way-too-small study group sizes in fields such as neuroscience, resulting mostly in unreliable "false alarm" results. 

(4) The use of poor methods of measurement in fields such as neuroscience, such as the widespread use of unreliable judgments of "freezing behavior" to judge fear or recall in rodents, rather than other more reliable methods. 

(5) A failure to follow a detailed blinding protocol.

(6) The extensive use of "keep torturing the data until it confesses" tactics, in which scientists fail to commit themselves to one straightforward method of gathering data and analyzing data, and instead act as if they had a license to endlessly play around with data, subjecting data to the most bizarre and convoluted arbitrary analysis pathways, that end up distorting and contorting the data gathered. 

smoke and mirrors neuroscience
What goes on in today's experimental neuroscience

When such problems exist in abundance, neuroscientists are largely engaging in a sham and a scam when they take federal money and pretend to be engaging in rigorous experimental science. 

junk neuroscience

Research science and science journalism are broken, and there is no sign that they are mending themselves.

Questionable Research Practices

A recent article on the Retraction Watch site is captured in the screenshot below. Notice the graph showing that the growth of fake or shoddy "paper mill" papers is outpacing the growth of regular scientific papers.

paper mill fraud

So-called "paper mills" are for-profit companies that offer "editorial services" to scientists, which can range from writing much of a paper to writing an entire paper. The outputs of paper mills are very often fake papers and programmatically generated papers in which everything is written by computer programs, programs that steal most of their text from other papers. Such paper mills make heavy use of AI programs such as ChatGPT, which often give wrong answers or low-quality text sometimes called "AI slop." Someone can ask a program such as ChatGPT to generate a hypothetical paper sounding like the typical paper published in some line of research. The same person might then try to get the AI-generated paper published as a real paper. Or someone might ask a system like ChatGPT to write a summary of research on some narrow topic.  That person might then try to get the paper published as a "review article" or "systematic review" or "meta-analysis." Many scientists unwilling to get involved with shady paper mills are using AI systems such as ChatGPT to write much of the text of their papers. The integrity and credibility of the paper may be compromised by the use of AI-generated text not written by any mind understanding the topic discussed. 


As Exhibit A to back up the claim that the state of scientific research may be worsening, I may offer a recent article by Ross Andersen in The Atlantic, entitled "Science Is Drowning in AI Slop." We read this, referring to the large language models used by so-called artificial intelligence or AI:

"Almost immediately after large language models went mainstream, manuscripts started pouring into [scientific] journal inboxes in unprecedented numbers. Some portion of this effect can be chalked up to AI’s ability to juice productivity, especially among non-English-speaking scientists who need help presenting their research. But ChatGPT and its ilk are also being used to give fraudulent or shoddy work a new veneer of plausibility, according to Mandy Hill, the managing director of academic publishing at Cambridge University Press & Assessment. That makes the task of sorting wheat from chaff much more time-consuming for editors and referees, and also more technically difficult."

Because of the AI slop problem, the state of science research may be even worse today than the very bad state described in the 2016 Vox article. 

Saturday, May 2, 2026

"Behavioral Timescale Synaptic Plasticity" Is Not Any Well-Established Natural Reality

Quanta Magazine is a widely read online magazine with slick graphics. On topics of science the magazine is again and again guilty of the most glaring failures. Quanta Magazine has often assigned its online articles about great biology mysteries (involving riddles a thousand miles over the heads of PhDs) to writers identified as "writing interns." The articles at Quanta Magazine often contain misleading prose, groundless boasts or glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here.

The latest example of false news in Quanta Magazine is an article with the bogus headline "A New Type of Neuroplasticity Rewires the Brain After a Single Experience." The claim is BS, pure baloney. Anyone familiar with the structure of the brain should instantly realize what nonsense this headline is. Neurons have fixed positions in the brain. Synapses are like roots in a dense forest: just as roots lock trees into fixed positions in the forest, synapses help lock neurons into their positions in the brain. Synapses are almost as slow-changing as the roots of trees in a forest. So physically the idea that a brain could be instantly rewired is nonsense.

The article starts out with this untrue claim: "Every experience we have changes our brain, the way a ceramicist reshapes a slab of clay." To the contrary, an experience does not change a brain. The analogy that the brain is like a lump of clay onto which impressions are written by experiences (like letters being written by the earliest cuneiform writers in Mesopotamia) is an extremely misleading analogy with no evidence to support it.  The brain has nothing like a stylus that could write such impressions. And no trace of any such impressions can be found. Microscopic examination of brain tissue (which has been done very abundantly) has never produced the slightest trace of anything  anyone learned. 

misleading brain analogy

We have this vacuous attempt to explain memory, not corresponding to any physical reality in the brain: "This plasticity, the quality of being easily reshaped, makes the brain really good at learning — a quintessential process that allows us to remember the plotline of a novel, navigate a new city, pick up a new language, and avoid touching a hot stove." It is not correct that brains are "easily reshaped," and it is not correct to suggest that brain structure changes after learning. Scan a brain before and after 8 hours of school learning, and you will see no difference. 

The writer then tells us a myth with no basis in fact, stating this:

"Recently, neuroscientists described a new form of neuroplasticity that might be helping the brain learn across a timescale of several seconds — long enough to capture the behavioral process of learning from a single experience. In two recent reviews, published in The Journal of Neuroscience (opens a new tab)

 and Nature Neuroscience(opens a new tab), they describe 'behavioral timescale synaptic plasticity,' or BTSP. This type of learning in the hippocampus, the brain’s memory hub, is caused by an electrical change that affects multiple neurons at once and unfolds across several seconds."

The claim that something called "behavioral timescale synaptic plasticity," or BTSP, is a "type of learning" is a claim without any basis in fact. Before looking at one of the reviews cited in the quote above (the one that is not behind a paywall), I must give some prefatory description of the social construction of discovery legends in today's neuroscience.

Research in cognitive neuroscience is dominated by low-quality studies. The study here concludes, "Our results indicate that the median statistical power in neuroscience is 21%." This is an abysmal figure. It has long been said that in experimental research, the goal should be a statistical power of 80%, meaning roughly an 80% chance of detecting a real effect of the size assumed. A study with a statistical power of 21% is a low-quality study that is likely to be announcing a false alarm. When a research field has a median statistical power of 21%, that means half of its studies have a statistical power of 21% or less. If such an estimation is correct, it means the great majority of neuroscience studies report results that are unreliable or untrue.
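The arithmetic behind these power figures can be sketched in a few lines of Python. This is a minimal illustration of my own, using the standard normal approximation for a two-sample comparison; the "medium" effect size of 0.5 is an assumed value for illustration, not a figure from any paper discussed here:

```python
# Sketch of a sample size / statistical power calculation for a
# two-group comparison, using the normal approximation.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a standardized
    effect of the given size (Cohen's d) with the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

def achieved_power(effect_size, n, alpha=0.05):
    """Approximate power actually achieved with n subjects per group."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(effect_size * sqrt(n / 2) - z_alpha)

# A "medium" effect (d = 0.5) needs roughly 63 subjects per group
# to reach 80% power.
print(n_per_group(0.5))                    # about 63
print(round(achieved_power(0.5, 6), 2))    # about 0.14
```

With only 6 animals per group, the approximate power is about 14%, nowhere near the 80% benchmark; with only one animal, no meaningful power calculation is even possible.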

A neuroscientist wishing to gain fame and funding may do some low-quality research, and claim to have discovered some new effect for the first time. The neuroscientist may coin some name for this alleged effect, perhaps using some acronym. Whether that name is forgotten may depend on whether other neuroscientists are willing to repeat the observational claim, and whether other scientists are willing to try to replicate the effect. If a claim of a discovery "presses the buttons" of neuroscientists by claiming something neuroscientists are eager to see, the original observer's discovery claims may be repeated by other scientists. But it will often be the case that there is no sound warrant for either the original observational claim or the similar claims by other scientists. An original low-quality paper reporting some effect may use poor experimental methods, and equally poor experimental methods may be used by others who claim to see the same effect.

Consequently we should never "take it for granted" that something is true, just because some scientific paper says that scientist X claimed to see such a thing, and that scientists Y and Z also claimed to see it. The social construction of groundless triumphal legends is extremely common in neuroscience literature. The standards for getting a neuroscience paper published are low; junk research and low-quality experimental papers are published every week. So you must always go back to the papers being cited, look at them critically, and ask: was there ever any decent evidence observed here?

Let's do that with one of the two papers cited by the Quanta Magazine article above, the one not behind a paywall. The paper is an extremely misleading review article entitled "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory," which you can read here. The paper is short on descriptions of reliable observations of things naturally occurring, and very long on triumphal narrations, most of which are groundless boasts hailing supposedly marvelous accomplishments of the most poorly designed and low-quality scientific studies. The paper's chief triumphal narration is the claim that something called behavioral timescale synaptic plasticity (BTSP) was discovered in 2017 by Katie C. Bittner and others.

We hear these claims by the "burst in the field" paper about this BTSP:
  • "It is triggered by occasional dendritic plateau potentials associated with a burst of firing in the soma.Neurons fire unpredictably at a rate between about 1 time per second and 100 times per second, with their firing rates varying unpredictably. So anyone analyzing noisy, variable data on neuron firings might be able to find "occasional dendritic plateau potentials associated with a burst of firing in the soma," even if there is no such thing as BTSP. Similarly, anyone analyzing cloud formations sufficiently will be able to find occasional cloud clumps with this or that shape. 
  • "BTSP operates on the timescale of seconds rather than milliseconds and can therefore support associative learning over temporal delays relevant to behavior." If this alleged BTSP occurs quickly, that is no reason at all for thinking it has anything to do with any type of learning. 
  • "It leads to large changes in synaptic strength, enabling fast remodeling of neuronal representations that may support one-shot learning." There is no robust evidence for representations in either neurons or synapses.  The size and strength of synapses vary randomly over days and weeks. An increase in the strength of some synapses is never evidence that anything is being represented. And you cannot actually have "large changes in synaptic strength" being produced by anything operating on a timescale of seconds. There is no robust evidence that "large changes in synaptic strength" ever naturally occur on a timescale of seconds. 
The paper claims this: "A different kind of plasticity called behavioral timescale synaptic plasticity (BTSP) has recently been uncovered in area CA1 of the hippocampus (Bittner et al., 2017; Milstein et al., 2021) and has properties that appear to solve many of the aforementioned limitations of Hebbian plasticity (Magee and Grienberger, 2020)." Bittner's 2017 paper supposedly first observing this alleged BTSP effect is behind a paywall. But we can look at this 2021 paper by Milstein, co-authored by Bittner. It is a very low-quality paper that you can read here, one with the misleading title "Bidirectional synaptic plasticity rapidly modifies hippocampal representations." There is no robust evidence for the representations claimed. 

This 2021 paper co-authored by Milstein and Bittner starts out by reciting many an unfounded legend and dubious dogma of neuroscientists. Then we have in Figure 1 some actual fresh observational data. The data is nothing remotely resembling compelling observational data. We have data from a single mouse that was on a treadmill. The graphs are not really hard data, because we have references to "place fields" that are social constructs of neuroscientists. Here the paper reports a claimed observational result that no one should take seriously unless it involved results from at least 15 or 20 animals per study group. But the study group consists of a measly one animal. The study group size is not even 10% of what it should be for reliable evidence to be claimed. 

Figure 2 is just as laughable as evidence. Since it has a caption of "mouse running," it seemingly also is data from a single mouse. Figure 3 fails to mention any study group size larger than 1.  We have a graph mentioning  "synaptic weights," one seeming to show some kind of increase. But the graph has no scale. No actual measurement of synaptic weights is occurring. Why of course -- synapses are things too tiny to be weighed with any accuracy. 

Later in the paper we have an indication that no reliable measurement of synaptic weights was occurring. We read, "We modeled changes in synaptic weights as a function of the time-varying amplitudes of these two biochemical intermediate signals, ET and IS." So apparently something else easier to measure was being measured, and the authors were engaging in the very dubious business of claiming that this other thing was some indication of synaptic weights.  That sounds rather like someone trying to deduce the weight of someone's meals by how much they spent on groceries this week -- not a reliable way of doing things. 

Nothing reliable is being done here to show that this claimed "Behavioral Timescale Synaptic Plasticity" naturally exists, or that it can produce rapid changes in synaptic strength. And even if you were to show such a thing, that would do nothing to explain instant learning, since changes in synaptic strength are not credible explanations of how newly learned information could be stored. For further evidence of the low quality of Milstein and Bittner's 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," we need merely search for whether a sample size calculation was done. The paper confesses, "Sample sizes were not determined by statistical methods." Why of course. Since laughable, ridiculous sample sizes such as only one mouse were used (rather than decent study group sizes such as 15 or 20 mice per study group), the authors did not do a sample size calculation, which would have revealed a ridiculously low statistical power way, way below 25%. 

Questionable research practices in cognitive neuroscience

By citing Milstein and Bittner's very low-quality 2021 paper "Bidirectional synaptic plasticity rapidly modifies hippocampal representations," the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has given us another example of what constantly goes on in the dysfunctional world of neuroscience research: paper authors citing very low-quality research as evidence of some effect they are arguing for, with the authors seeming to apply no critical scrutiny before citing a paper.  


The Quanta Magazine article and some of the papers mentioned above refer to a 2015 paper co-authored by Bittner and Jeffrey C. Magee, entitled "Conjunctive input processing drives feature selectivity in hippocampal CA1 neurons." While the study makes reference to a "normative" pool of 21 mice, the study's claims of detecting something are based on a way-too-small study group size of only 6 mice. It's another very low-quality paper failing to provide any decent evidence of "Behavioral Timescale Synaptic Plasticity." The authors confess that "no statistical methods were used to predetermine sample sizes," which is always a damning confession in a scientific paper of this type, kind of a "we were too lazy to act like good scientists" confession. But (paying no attention to quality factors) the Quanta Magazine article senselessly treats the study as if it were something important, and has a big photo of Magee. This is typical for Quanta Magazine, which seems never to pay any attention to whether neuroscience studies are meeting the hallmarks of robust, well-designed science. 

The paper "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" has graphs from the year 2025 paper "Synaptic plasticity rules driving representational shifting in the hippocampus" that you can read here. That paper mainly refers vaguely to "mice" without mentioning exact study group sizes. But occasionally the paper does mention the exact study group sizes, which were way-too-small study group sizes such as only 4 mice, only 7 mice and only 9 mice. We read, "CA1 recordings were done in 4 mice, CA3 recordings in 7 mice and optogenetic experiments in 9 mice." These study group sizes were way-too-small for the paper to be taken as serious evidence of anything. No paper like this should be taken seriously unless 15 or 20 animals per study group were used. We have the damning confession in the paper that "No statistical method was used to predetermine sample size." If the paper authors had acted like good scientists by doing such a calculation, they would have found out how inadequate were the study group sizes they used. We also read, "Investigators were not blinded to CA1 or CA3 groups." This is a crucial defect for a paper like this. We have here a very low-quality example of a Questionable Research Practices study, one that fails to provide any good evidence for "Behavioral Timescale Synaptic Plasticity." 

So it seems the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" (which you can read here) fails to cite any convincing studies showing any such phenomenon as "Behavioral Timescale Synaptic Plasticity." I reach this conclusion not based on a reading of all the papers cited by that article, but by looking at the studies discussed above, all of which were very low-quality papers badly guilty of Questionable Research Practices. The article provides no robust evidence that scientists have demonstrated any natural ability by which synapses could be instantly or very quickly strengthened. 

The review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" is a mere review article, not a systematic review. In scientific literature, systematic review articles are articles following a clear methodology in regard to which papers are to be cited as evidence, a quality filter clearly defined in the paper. A mere review article involves citing any papers that the authors wish to cite, without the papers being subjected to a quality filter stated in the paper. In today's neuroscience literature there is a plethora of misleading review articles citing poor-quality papers. 

review article versus systematic review

We get an indication in the review article "Behavioral Timescale Synaptic Plasticity: A Burst in the Field of Learning and Memory" that what is typically going on in the results it reports are not natural occurrences, but instead artificial occurrences produced by scientists doing special fiddling. In the article we read this: "The main approach, introduced by the Magee lab (Bittner et al., 2017), is to artificially induce BTSP with a long-lasting high–amplitude depolarization of the soma (typically a 300 ms 600 pA current injection) triggering a somatic CS (which is assumed to reflect a dendritic plateau), preceded or followed by synaptic activity within a few seconds time window (Fig. 3B)." 

So mainly what is being reported are artificial results produced by current injections -- experimenters zapping brains with electricity or electrical currents. The paper then says that this can be either done "in vitro" (that  is, using tissue detached from an organism's body) or "in vivo" (observing something inside a living organism). But we are told that the "in vivo" observations require artificial experimenter manipulations such as "current injections" or "optogenetic stimulations." Both are artificial types of brain zapping. Observations requiring such energy injections by experimenters  are not evidence of something naturally occurring in the brain. 

Bungling Neuroscientist

I propose the term "electromisrepresentation" to describe misleading narratives of this type. We can define electromisrepresentation as the artificial production of brain effects by methods such as electrical stimulation, combined with a misleading narrative trying to suggest that the resulting effects can explain natural human capabilities. Electromisrepresentation has massively occurred in discussion of so-called long-term potentiation or LTP. 

The Quanta Magazine article based on this scientific paper is also very misleading bunk, an article that attempts to persuade us of the existence of something for which there is no robust experimental evidence. A very bad example of groundless narration, the article is full of untrue statements claiming magnificent accomplishments from scientists who actually ran very low-quality studies deserving mainly scorn, because of multiple methodological sins such as the use of way-too-small study group sizes, the lack of a blinding protocol, the lack of pre-registration, and an abundance of unreliable claims about the physical state of things (synapses) too small to have their physical state reliably measured. Occasionally the Quanta Magazine article gives an indication of what baloney it is shoveling, such as when it says, "There’s still much unknown about BTSP, especially the mechanism, which Madar said is 'quite speculative.'"

But that's par for the course in the untrustworthy world of today's neuroscience research, where neuroscientists these days boast like crazy about doing all kinds of wonderful things that were not actually done, because decent scientific procedures were never followed, and the experiments were so poorly designed and guilty of so many defects. 

There are two very strong reasons for rejecting all claims that there is good evidence that there are ever any quick natural increases in synaptic strength:
(1) The intrinsic unreliability of all attempts to measure the strength of synapses, given the incredibly small size of synapses, which makes all attempts to measure their strength dubious and unreliable. The largest parts of synapses (their clefts) are about 500 to 1000 times smaller than the largest part of a neuron (its soma or main body). 
(2) The intrinsic implausibility of any claims that synapses could naturally be quickly strengthened, given the fact that any synapse strengthening would require new protein synthesis, a process that takes minutes or hours. 

Scientists have never produced any credible tale to explain how either instant learning or learning of any type could occur in a brain, which has no components having any resemblance to a system for storing learned information. You would never show information storage or a storage of learned data or experiences by merely showing an increase in the strength of something.  No well-designed and robust scientific studies have ever produced any compelling evidence that learned information has been physically stored in brains. Claims about LTP arose from the type of artificial brain tissue zapping described above, with researchers ignoring that what was occurring did not correspond to natural events in the brain. Microscopic examination of brain tissue has never yielded the slightest trace of anything anyone learned or experienced -- not a single sentence, not a single word, not even a single character or letter or number or even a single pixel of anything anyone saw. 

Scientists have reliably determined that synapses are built of proteins that have average lifespans of only a few weeks, roughly a thousandth (1/1000, that is .001) of the maximum amount of time that humans can remember things, which is about sixty years. Besides utterly failing to explain how a brain could do memory storage, and how memories could persist for decades, scientists have utterly failed to explain how a brain could do instant memory retrieval, or memory retrieval of any type. We know the types of things that allow for instant retrieval of stored information: things such as addresses, indexing and sorting. No such things exist in the brain. The world of neuroscientist claims about memory is a world of fantasy and pareidolia, in which neuroscientists eagerly hoping to see things claim to see the faintest evidence of such things, like some wanderer in the desert eagerly scanning the far horizon five miles away and claiming to see water on the far horizon (although only a mirage is there, or only "see what you yearn to see" pareidolia is occurring). 
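The "roughly a thousandth" ratio above is easy to verify with a couple of lines of arithmetic. This is my own illustrative check, assuming "a few weeks" means about 3 weeks:

```python
# Quick arithmetic check of the ratio claimed above: synaptic proteins
# lasting a few weeks versus memories lasting about sixty years.
weeks_per_year = 365.25 / 7                 # about 52.2 weeks in a year
memory_span_weeks = 60 * weeks_per_year     # about 3131 weeks in 60 years
protein_lifespan_weeks = 3                  # "a few weeks" (assumed value)
ratio = protein_lifespan_weeks / memory_span_weeks
print(round(ratio, 4))                      # 0.001, roughly a thousandth
```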

For a discussion of the very many ways that scientists have of conjuring up claims of things that don't exist, see my post here entitled  "Scientists Have a Hundred Ways To Conjure Up Phantasms That Don't Exist," and my post here entitled "The Social Construction of Eager Community Mirages."