Academic press releases these days are very often guilty of bogus boasts, and press releases announcing neuroscience research are some of the worst examples of misleading PR. Looking at "science news" headlines today, I find a prime example. It is a press release from Virginia Tech, which appears on the frequently misinforming Medical Xpress site with the headline "Scientists find ways to boost memory in aging brains."
Before reading the text below the headline, I thought to myself that this would turn out to be more of the low-quality rodent research that sadly dominates neuroscience these days. I was right; that's just what it is.
After making unfounded statements speaking as if scientists knew things they do not actually know, the press release claims "the researchers were able to improve memory in older rats," mentioning the study here. It's a study called "Age-related dysregulation of proteasome-independent K63 polyubiquitination in the hippocampus and amygdala," which is behind a paywall, with no preprint available. But without spending 36 dollars to read the study, there's a way to tell that it is low-quality, unreliable research. When viewed on a PC (but not a tablet), the page with the abstract shows an image of the paper's visuals. Although it is a low-resolution image, I can still see enough to tell that very small study group sizes were used. There are bar graphs that look like the one below:
In a graph like this, the total number of * symbols behind each bar indicates the study group size, with each * symbol indicating the data from one animal subject. Now, by squinting at the graphs you can see
here, and by looking at the number of * symbols behind the bars, you can tell that the study group sizes used in the "Age-related dysregulation of proteasome-independent K63 polyubiquitination in the hippocampus and amygdala" study were way too small: only about 10 subjects or fewer per group. No study like this can be claimed as reliable evidence for anything unless it uses study groups of at least 15 or 20 subjects each. And for almost all studies of this type, a study group size much larger than 15 would be needed.
A similar 2011
paper by the corresponding author of the paper mentioned above (Jarome) is publicly available, as is a
2013 paper, a
2024 paper, and a
2016 paper. All involve memory-related rat experiments using way-too-small study group sizes, typically only about 9 animals per group. You can find this kind of defect by searching in the papers for the phrase "n =" or "n=". All four of these papers also involve reliance on
the unreliable "freezing behavior" method of trying to judge animal recall. None of the four papers involved a blinding protocol, and none of the four seems to have involved any sample size calculation. The production and publication of papers with such defects is a long-standing disgrace in cognitive neuroscience, where badly designed studies seem to be more the rule than the exception.
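Checking papers for this defect is easy to automate. Below is a minimal sketch in Python, under the assumption that you have the paper's text saved in a plain-text file; the file name and the exact pattern are illustrative, not taken from any of the papers above. It simply pulls out every "n =" statement along with a little surrounding context, so you can quickly see the reported group sizes.

```python
import re

# Illustrative sketch: scan a paper's extracted text for reported group sizes.
# "paper.txt" is a hypothetical file holding the plain text of the article.
with open("paper.txt", encoding="utf-8") as f:
    text = f.read()

# Match statements like "n = 9", "n=12", or "N = 10".
for match in re.finditer(r"[nN]\s*=\s*\d+", text):
    start = max(0, match.start() - 40)
    end = min(len(text), match.end() + 40)
    snippet = " ".join(text[start:end].split())  # collapse line breaks for display
    print(f"{match.group()}  ...  {snippet}")
```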
The Virginia Tech press release also refers to a second study (co-authored by Jarome), and the full text of that study is available. It is a study called "Increased DNA methylation of Igf2 in the male hippocampus regulates age-related deficits in synaptic plasticity and memory," which you can read
here. We read of way-too-small study group sizes for six experiments, with an average study group size of only about 10 subjects and no study group larger than 12 subjects. This use of way-too-small study groups is a fatal defect in the study's design. The study also relied on judgments of "freezing behavior" in rodents, which are not a reliable way to measure how well an animal remembered anything, for reasons I explain in my post
here. All correlational studies of this type relying on "freezing behavior" estimations are junk science, as are almost all studies of this type using fewer than 15 or 20 subjects per study group.
In the case of the study "Increased DNA methylation of Igf2 in the male hippocampus regulates age-related deficits in synaptic plasticity and memory," we have three indicators of junk science:
- Way-too-small study group sizes, all 12 or smaller.
- The use of "freezing behavior" judgments, which are not a reliable way to measure how well a rodent remembered anything.
- The lack of any blinding protocol, which is essential for a study of this type to be taken seriously.
So the scientists at Virginia Tech certainly did not "find ways to boost memory in aging brains." There are ways to effectively boost memory in aging people, but no techniques that are accurately described as techniques for boosting memory "in brains," and no robust evidence that memories are stored in brains (which yield no evidence of information a human learned, no matter how closely their tissue is microscopically studied). Ways to boost memory in aging people include (1) stimulants such as coffee, which make a person more alert and better able to learn; and (2) psychological techniques such as mnemonics, which can improve a person's ability to acquire and recall memories.
The study group size required for a typical neuroscience study is related to the effect size, which is small in almost all neuroscience studies. The smaller the effect size, the larger the study group needed for a reliable result. Well-designed studies use a sample size calculation to determine the study group size needed for good statistical power. No such calculation was done in the study "Increased DNA methylation of Igf2 in the male hippocampus regulates age-related deficits in synaptic plasticity and memory."
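For readers who want to see what such a calculation looks like, here is a minimal sketch using Python's statsmodels package. The effect sizes plugged in are the conventional "large" and "medium" benchmarks (Cohen's d of 0.8 and 0.5), not values taken from the studies discussed above.

```python
# Minimal power-analysis sketch: how many subjects per group a two-sample
# t-test needs at alpha = 0.05 and 80% power, for two benchmark effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for label, d in [("large effect (d = 0.8)", 0.8), ("medium effect (d = 0.5)", 0.5)]:
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"{label}: about {n_per_group:.0f} subjects per group")
```

Under those usual standards, even a large effect calls for roughly 26 subjects per group, and a medium effect calls for roughly 64, far more than the 6 to 12 subjects per group used in the studies discussed here.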

The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" (alternate link: here) has a Table 2 (shown below) giving recommendations for minimum study group sizes for different types of research. The minimum subjects for an experimental study are 21 subjects per study group. The studies mentioned above are experimental studies that used only about half of this minimum number. The "case study" type mentioned below is a different type of study in which you merely document one or a few occurrences of some condition or situation, without trying to show a cause.
Postscript: A year 2025 study co-authored by Jarome is the paper "Proteasome-independent K63 polyubiquitination selectively regulates ATP levels and proteasome activity during fear memory formation in the female amygdala," which mentions the same K63 referred to in one of the paper titles above. Don't get the wrong idea from the statement that "Twenty-one male and 110 female 8- to 9-week-old Sprague Dawley rats were used." The real question is: how many of these rats were used in the test meant to support the claim that memory was affected? We get the answer in Figure 3E, where we learn that the study groups consisted of only about 6 rats each. We see the same old "freezing behavior" chart, a sign that the study used the worthless, unreliable method of trying to measure recall in rats by judging non-movement during some arbitrary time interval. The authors claim there was a difference in memory in females, but not in males, based on a chart showing low "freezing behavior" in six female rats. But one of the many reasons for discarding all papers based on such "freezing behavior" methods is that male rats and female rats show very different levels of so-called "freezing behavior."
The paper "The Difference between Male and Female Rats in Terms of Freezing and Aversive Ultrasonic Vocalization in an Active Avoidance Test" tells us this: "We found that males were more likely to experience freezing (40%) than females (3.7%)." This very low levels of freezing behavior in females (and a level of only 40% in males) helps to make it all the more clear how badly neuroscientists are bungling by trying to measure recall or fear in rodents by trying to judge "freezing behavior," particularly whenever female rats are involved. The reported result that freezing behavior occurs very much less commonly in female rats than male rats is also reported in the year 2025 paper here, and you can see the difference in that paper by comparing Figure 1 and Figure 2.