Saturday, April 12, 2025

When PR for Junk Neuroscience Studies Is Passed Off as "Medical News"

The site MedicalXPress.com positions itself as a medical news site. It seems to be one of those sites (such as ScienceDaily.com or Eurekalert.com) that mainly just uncritically publish the latest university and college press releases, and position their latest collection of these as either "science news" or "medical news." What's wrong with that? Well, for one thing, university and college press releases these days are notorious for their hype, exaggerations, misstatements and frequently untrue headlines when they are announcing newly published research done at their institutions. So if you run a site that mostly publishes, unedited, the latest research press releases of universities and colleges, you are aiding and abetting the proliferation of false and misleading information.

Another thing wrong with what goes on at MedicalXPress.com is that the site is guilty of frequently trying to pass off very low-quality neuroscience research as "medical news."  The type of junk rodent research that neuroscientists typically do has no relevance to human health, and it is often very misleading to be selling such shoddy work as medical news, particularly since the work did not involve any attempt to develop a medicine or treatment for humans. 

Let's look at an example of the news items that have appeared at the site, involving work utterly unworthy of the attention of people interested in keeping up with medical news. A recent article at the site had the untrue headline "Study shows that dendritic plasticity contributes to the integration of memories." It was an article attempting to persuade us that microscopic structures in brains called dendritic spines have something to do with memory. There is no good evidence that this is true.

By the third paragraph of the article, we got some baloney shoveling. Scientist Alcino Silva made grandiose boasts about a very low quality study he co-authored, stating, "A few years back, in a landmark study published in Nature in 2016, we demonstrated that memories formed a few hours apart are linked because they are stored in a common set of neurons in the hippocampus." He was referring to his very low-quality junk science study "A shared neural ensemble links distinct contextual memories encoded close in time," which you can read here.  It was a rodent study using way-too-small study group sizes such as only 4 or 7 or 8 mice. No study of this type should be taken seriously unless it uses at least 15 or 20 subjects per study group. The study also hinged upon the utterly unreliable technique of trying to judge recall in rodents by making subjective judgments of so-called "freezing behavior."  All neuroscience studies that use that utterly unreliable technique are examples of junk science, for reasons I explain here.  The fact that this study has been cited over 1000 times shows the dismally dysfunctional state of neuroscience, in which researchers routinely cite very low-quality junk science studies. 

We then have an equally grandiose and equally groundless boast by another neuroscientist. We read, " 'We showed that when mice form two memories close in time, we can see that many of the same somas, dendritic branches, and spines are involved in forming these two memories,' explained Megha Sehgal, the first author and a co-corresponding author of the paper." Nothing of the sort was done. Dendritic spines are constantly forming and disappearing throughout the brains of every mammal. On any day in the brain of any mammal, there are many millions of dendritic spines appearing; there are many millions of dendritic spines disappearing; there are many millions of dendritic spines enlarging; and there are many millions of dendritic spines shrinking. An observation of something happening to dendritic spines is never evidence that those dendritic spines had anything to do with the formation of a memory. 

Later, getting very excited about her grandiose but groundless claims, Sehgal states this: "When we forced independent memories to be stored in the same neuronal somas or even the dendrites and found just this simple intervention in one brain region, the retrosplenial cortex, was enough to link these memories!" The claims are groundless. Scientists have no evidence that memories are stored in any place in a brain, and any claim by a scientist that he or she forced a memory to be in some particular place is a bogus boast.

The groundless boasts being made are based on research reported in the very low-quality paper here, a paper entitled "Compartmentalized dendritic plasticity in the mouse retrosplenial cortex links contextual memories formed close in time." It's the usual type of very low-quality work we see from rodent memory researchers. The paper uses way-too-small study group sizes such as only 4 mice per group or only 9 mice per group or only 12 mice per group. No rodent research paper of this type using fewer than 20 subjects per study group should be taken seriously, unless it mentions that the authors did a sample size calculation showing that fewer than 20 subjects was sufficient to achieve a good statistical power such as 80% (which this paper does not). Again, we have a paper that hinges upon attempts to measure rodent recall by using the utterly unreliable technique of trying to judge "freezing behavior." All neuroscience studies that use that utterly unreliable technique are examples of junk science, for reasons I explain here.

Among the very many reasons why attempting to judge "freezing behavior" is a marker of junk neuroscience is the fact that there is no standard approach to how such judgments occur, in regard to the interval of time used. So a researcher can try to judge a mouse's immobility for three minutes, and if he gets a claimed "freezing percentage" he likes, he can use that; but if the percentage is not what he likes, he can use only the first two minutes; and if that percentage is not what he likes, he can use only the first minute; and if that percentage is not what he likes, he can use only the first 30 seconds. That is a "see whatever you want to see" kind of deal, rather than trustworthy measurement. To help show that this is not going on, a researcher must always list the time interval used for each and every one of his "freezing behavior" graphs, to show that at least the same time interval was used each time. In the case of the very low-quality paper I am discussing here, the paper "Compartmentalized dendritic plasticity in the mouse retrosplenial cortex links contextual memories formed close in time," we see many "freezing behavior" graphs, none of which mention the time interval corresponding to the graph. This is science at its clumsiest. Were these claimed degrees of freezing behavior occurring over 30 seconds, 60 seconds, 90 seconds, 120 seconds, or 180 seconds? We are not told, and we cannot tell whether the time interval is different for each graph. So we cannot even tell whether there is a match between the technique used and the experimental conventions of rodent-researching neuroscientists.
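
To make the problem concrete, here is a minimal sketch (using invented immobility data, not numbers from any paper) of how the very same second-by-second immobility record yields different "freezing percentages" depending on which scoring window is chosen.

```python
# Hypothetical illustration: how the choice of scoring window changes
# a reported "freezing percentage" for the same immobility record.
import random

random.seed(1)

# One second-by-second immobility record for a 180-second test
# (True = mouse judged immobile that second). Freezing is front-loaded,
# as is common right after a cue, so shorter windows score higher.
record = [random.random() < (0.8 if t < 30 else 0.3) for t in range(180)]

for window in (30, 60, 120, 180):
    pct = 100 * sum(record[:window]) / window
    print(f"first {window:3d} s: freezing = {pct:4.1f}%")
```

Because nothing in the paper pins down the window, a reader has no way of knowing which of these numbers corresponds to what was reported.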

Because no reliable technique was used to measure fear or recall, we should disbelieve the claim made in the MedicalXPress article that "Silva, Sehgal and their colleagues found that, following their experimental intervention, mice became scared of a box that was previously unimportant to them, simply because the memory of this box was stored in the same dendrites that stored memories of a box in which they experienced an electric shock." Such researchers could have used a reliable technique for measuring fear in mice: heart rate measurement, a technique that is reliable because heart rate spikes dramatically when mice are afraid. Like typical rodent memory researchers, the authors chose the unreliable technique of trying to judge "freezing behavior" rather than the reliable method of looking for heart rate spikes in mice.

We have the usual lame-as-lame-can-be confession in this paper that the authors failed to do a sample size calculation, as good experimental scientists should do. We read, "No statistical methods were used to predetermine sample sizes but our sample sizes are similar to those reported in previous publications." It is a great scandal that neuroscientists routinely use way-too-small group sizes, creating papers without any decent statistical power, papers mostly reporting only false alarms. Trying to excuse yourself by pointing out that other researchers are using the same way-too-small study group sizes is as lame and laughable as saying, "I don't pay my taxes, but lots of my friends also don't pay their taxes." 

[Image: neuroscientist confession]

The junk science paper "Compartmentalized dendritic plasticity in the mouse retrosplenial cortex links contextual memories formed close in time" is another joke of a neuroscience research paper in which the study group sizes are smaller than the number of authors, which should make us laugh hard and say: "What was the rule here: only one mouse per researcher?"

[Image: inadequate sample sizes in neuroscience]

You can do a Google image search for "Effect size versus sample size" to get an idea about the relation between the two, and another thing called statistical power. Some of the visuals you will see may be hard to understand. The visual below describes the situation in an easy-to-understand way.

[Image: neuroscience sample sizes]

The smaller the effect size, the larger the study group size needed to show something in a convincing manner (such as with a statistical power of 80%). Most effect sizes in neuroscience are small. That means in the great majority of cases a study group of 25 or more is needed. No experimental neuroscience study should be taken seriously if its smallest study group has fewer than 15 subjects. It is conceivable that a neuroscience experimental study using 15 or 20 subjects might provide modest evidence for something, but only in the very unlikely case of a high effect size. For the much more likely case of an effect size that is only medium or low, at least about 35 subjects per study group are needed.
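
For readers who want to check such numbers themselves, here is a minimal sketch using the statsmodels Python library, computing the per-group sample size needed for 80% statistical power in a simple two-group comparison at the conventional Cohen effect-size benchmarks (the benchmarks are my illustrative choice, not figures taken from any of the papers discussed above).

```python
# Minimal power-analysis sketch: per-group sample size needed for
# 80% power at alpha = 0.05 in a two-sample t-test, across the
# conventional Cohen effect-size benchmarks.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for label, d in [("large", 0.8), ("medium", 0.5), ("small", 0.2)]:
    n = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"{label} effect (d = {d}): about {n:.0f} subjects per group")
```

This prints roughly 26 per group for a large effect, 64 for a medium effect, and 393 for a small effect, so study groups of 4 to 12 mice fall far short even under the most optimistic assumption.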

As a general rule, we should be dismissing as junk science any experimental neuroscience study that neither uses at least 15 subjects per study group nor does a sample size calculation to determine whether the number of subjects was adequate to achieve a good statistical power such as 80%. If the smallest study group size is between 15 and 30, we should regard the paper as possibly modest evidence for something only if the authors did a sample size calculation showing that, for the kind of effect size they were dealing with, the study group size they used was adequate to achieve a good statistical power.

Strangely, in this paper with 17 listed authors we have a reference to "The investigator who collected and analyzed the data including behavior, imaging and staining." So there were 17 people listed as paper authors, but only one person "who collected and analyzed the data"? Who was that person? Was it a PhD, or merely a graduate student? Was it someone who had any experience in doing this kind of extremely tricky, easy-to-get-wrong work, involving lots of high-tech equipment easy to misuse, and lots of "freezing behavior" estimations so easy to get wrong? Or was it some graduate student fumbling around while doing such work for the first time? We'll never know, because this "investigator who collected and analyzed the data including behavior, imaging and staining" has not been named. The failure of papers such as this to list the specific people who observed things, and when they observed them, is another huge reason for distrusting such papers. And why were 17 people listed as authors, when there was only a single "investigator who collected and analyzed the data"? Elsewhere a scientist tells us, "Anytime you critique a paper in my field, you might think you’re critiquing the senior scientists on the paper, but they usually have a graduate student or a postdoc who wrote the thing."

What goes on nowadays in the science literature is that scientists routinely list themselves as authors of papers they did not write, involving research they had no substantial involvement in. Then scientists make their accomplishments sound ten times greater than they are, by making claims such as "I am the author of 100 scientific papers," when they were merely co-authors of such papers, with the papers typically having a dozen or more listed authors.

We should distrust or not believe the claim in the paper that "The investigator who collected and analyzed the data including behavior, imaging and staining was blinded to the mouse genotypes and treatment conditions." Blinding is an important feature of well-designed experiments, as it helps to reduce the chance of "see whatever you want to see" kind of bias. Effective blinding requires a well-designed blinding protocol that takes at least a long paragraph to state. Whenever you read a mere one-sentence assertion that some blinding occurred, without any detailed discussion of an effective blinding protocol, the claim that blinding occurred should not be trusted. An effective blinding protocol in a neuroscience experiment typically requires multiple people involved in collecting and analyzing data. For example, one person might perform some intervention (such as injecting something) into one group of mice (not a group of control mice); some other person not knowing which mice got the shot might perform some performance test on both the mice that got the shot and the control mice; and some third person not knowing which mice got the shot might analyze the performance data. But when you have a single "investigator who collected and analyzed the data," there's no way for that person to be blind about which rodents are in the control group -- unless some very complicated and ingenious scheme was followed to assure effective blinding. If there had been so clever a scheme, we may assume that whoever wrote up the paper would want to tell us about such ingenuity. Since no details about a blinding protocol have been given, other than the bare claim that blinding occurred, we should distrust or disbelieve the claim that this single investigator "was blinded to the mouse genotypes and treatment conditions." 
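
As an illustration only (the roles, file name, and mouse IDs below are hypothetical, not details from the paper), here is a minimal sketch of the kind of coded-ID scheme that makes a blinding claim credible, and that inherently requires more than one person.

```python
# Hypothetical sketch of a coded-ID blinding scheme requiring role
# separation: person A holds the sealed key; persons B and C never do.
import json
import random

random.seed(42)

# Person A: randomize 20 hypothetical mice into two equal groups,
# then seal the mapping until all scoring and analysis are finished.
mice = [f"mouse_{i:02d}" for i in range(1, 21)]
random.shuffle(mice)
assignments = {m: ("treatment" if i < 10 else "control")
               for i, m in enumerate(mice)}
with open("sealed_code.json", "w") as f:
    json.dump(assignments, f)  # opened only after analysis is complete

# Persons B and C: receive only neutral, sorted IDs with no treatment
# information attached; they score behavior and analyze the data blind.
coded_ids = sorted(assignments)
print(coded_ids)
```

The point is structural: the people scoring and analyzing never hold the key. A single investigator who both "collected and analyzed the data" cannot arrange that separation alone.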

The insinuation in the junk science papers mentioned above (that dendritic spines help store memories) makes no sense. Dendritic spines no more resemble a place of written information than the twigs on trees do. And dendritic spines are too unstable to explain memories that can last for decades.

[Image: dendritic spine]

 The 2015 paper "Impermanence of dendritic spines in live adult CA1 hippocampus" states the following, describing a 100% turnover of dendritic spines within six weeks:

"Mathematical modeling revealed that the data best matched kinetic models with a single population of spines of mean lifetime ~1–2 weeks. This implies ~100% turnover in ~2–3 times this interval, a near full erasure of the synaptic connectivity pattern."

The paper here states, "It has been shown that in the hippocampus in vivo, within a month the rate of spine turnover approaches 100% (Attardo et al., 2015; Pfeiffer et al., 2018)." The 2020 paper here states, "Only a tiny fraction of new spines (0.04% of total spines) survive the first few weeks in synaptic circuits and are stably maintained later in life." The author here is telling us that only 1 in 2,500 dendritic spines survives more than a few weeks. Given such an assertion, we should be very skeptical about the author's insinuation that some very tiny fraction of such spines "are stably maintained." No one has ever observed a dendritic spine lasting for years, and the observations that have been made of dendritic spines give us every reason to assume that dendritic spines never last more than a few years. By contrast, human knowledge and human motor skills can last for 50 years or more, way too long a time to be explained by changes in dendritic spines or synapses, both of which change too much and too frequently to be a stable storage place for human memories.

The failure of neuroscientists to listen to what dendritic spines are telling us is epitomized by a 2015 review article on dendritic spines, which states, "It is also known that thick spines may persist for a months [sic], while thin spines are very transient, which indicate that perhaps thick spines are more responsible for development and maintenance of long-term memory." It is as if the writers had forgotten that humans can retain memories very well for 50 years, a length of time a hundred times longer than "months."

A 2019 paper documents a 16-day examination of synapses, finding "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That's about a 33% disappearance rate over the course of 16 days. The same paper refers to another paper that "reported rates of [dendritic] spine eliminations in the order of 40% over an observation period of 4 days." A paper studying the lifetimes of dendritic spines in the cortex states, "Under our experimental conditions, most spines that appear survive for at most a few days. Spines that appear and persist are rare."

The 2023 paper here gives the graph below showing the decay rate of the volume of dendritic spines. It is obvious from the graph that they do not last for years, and mostly do not even last for six months.

[Graph: decay of dendritic spine volume over time]

Page 278 of the same paper says, "Two-photon imaging in the Gan and Svoboda labs revealed that spines can be stable over extended periods of time in vivo but also display genesis (generation) and elimination (pruning) at a frequency of 1–4% per week." Something vanishing at a rate of 2% per week would, extrapolating that rate linearly, be gone within a year. Discussing the motor cortex, the paper here says, "We found that 3.5% ± 1.2% of spines were eliminated and 4.3% ± 1.3% were formed in motor cortex over 2 weeks (Figures 3J, 3K, and 3O; 224 spines, 2 animals)." An elimination rate of 3.5% over two weeks would, extrapolated the same way, result in roughly 90% elimination over the course of a year.
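
For transparency about the arithmetic (this extrapolation is mine, not a calculation appearing in either paper), here is a sketch comparing the simple linear extrapolation used above with a compounding model in which the rate applies only to the spines surviving each period.

```python
# Extrapolating a per-period elimination rate over a year, two ways:
# linear (rate applies to the original population) vs. compounding
# (rate applies to whatever survives each period).
def linear_loss(rate_per_period: float, periods: float) -> float:
    return min(1.0, rate_per_period * periods)

def compounding_loss(rate_per_period: float, periods: float) -> float:
    return 1.0 - (1.0 - rate_per_period) ** periods

# 3.5% elimination per 2-week period, 26 such periods in a year
print(f"linear:      {linear_loss(0.035, 26):.0%} eliminated in a year")
print(f"compounding: {compounding_loss(0.035, 26):.0%} eliminated in a year")
```

Linear extrapolation gives about 91% eliminated in a year; the gentler compounding model still gives about 60%. Either way, most of the spines present at the start of the year are gone by its end, which is the point that matters for memories lasting decades.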

The 2022 paper "Stability and dynamics of dendritic spines in macaque prefrontal cortex" studied how long dendritic spines last in a type of monkey. It says, "We found that newly formed spines were more susceptible to elimination, with only 40% persisting over a period of months." The same study found that "the percentage of elimination for pre-existing spines over 7 days was only 6% on average," which is a rate that would cause complete disappearance of pre-existing dendritic spines within a year. Dealing with another type of monkey, the 2015 paper "In Vivo Two-Photon Imaging of Dendritic Spines in Marmoset Neocortex" tells us that "The loss or gain rate at the 1 d [one day] interval observed in this study was similar to those in previous studies of layer 5 neurons of the somatosensory cortex of transgenic mice (12% in 3 d [3 days] for both loss and gain; Kim and Nabekura, 2011) and layer 2/3 neurons of ferret V1 by the virus vector method (4% in 1 d [1 day] for both loss and gain; Yu et al., 2011)." The reported rates of dendritic spine loss would cause 100% loss within a year.

Most synapses are attached to dendritic spines, so all of these findings about the instability and short lifetimes of dendritic spines are also findings about the instability and short lifetimes of synapses. Both synapses and dendritic spines are way, way too unstable to be a credible storage place for human memories that can last for 50+ years. There is no place in the brain that can be reasonably postulated as a storage place allowing memories to persist for 50 years. 

Currently the main story on MedicalXPress.com is a story groundlessly claiming that researchers found that groups of neurons encode different types of pain. It's another promotion of a junk science paper, because the study group sizes are so small, with study groups of only 6 mice, 5 mice and 4 mice. Neuroscientists these days are guilty of very often making misleading uses of the terms "encode" and "representation," by claiming to have found "encoding" or "representation" when no robust evidence of any such thing was found.
