Saturday, December 27, 2025

LA Times Writer Falls "Hook, Line and Sinker" for the Groundless Legend of Neuroscientist Memory Manipulation

 "Exaggerated claims and low levels of reproducibility are commonplace in psychology and cognitive neuroscience, due to an incentive structure that demands 'newsworthy' results."  -- Psychologist Richard Ramsey (link). 

When a writer is not a careful scholar of the innumerable shortfalls, bad research methods and Questionable Research Practices so abundant in today's neuroscience research, that writer may be a pushover for the groundless boasts of a glory-hounding researcher, and may produce a "puff piece" that signals a lack of critical scrutiny toward weak scientific research. The latest example of such a writer is Corinne Purtill, a staff writer for the LA Times, who has written a very misleading puff piece about the neuroscientist Steve Ramirez. It has the nonsensical title "‘Memory manipulation is inevitable’: How rewriting memory in the lab might one day heal humans." 

We have from Purtill this entirely false claim with no basis in fact: "But over the last two decades, neuroscientists have found mind-bending ways to control this process (in mice, at least): implanting false memories, deleting real ones, resurrecting memories thought lost to brain damage, detaching the memory of an emotional reaction to one event and attaching it to the memory of another." These claims have been made in neuroscience papers, but none of those papers described robust research. All of the studies described by such papers were examples of very low-quality research, guilty of multiple types of Questionable Research Practices, such as the use of way-too-small study group sizes. 

We have in the article boastful quotes from the neuroscientist Steve Ramirez, claiming that scientists can manipulate memories by fiddling with brains.

Purtill gives us this account, which is not justified by what the paper reports:

"Three years earlier, a University of Toronto team identified the neurons that lit up when a mouse was exposed to a scary stimulus — in this case, a sound that earlier accompanied a shock. The Toronto researchers then injected the mice with a toxin that killed only those brain cells that lit up when they heard the sound. The result: The treated mice no longer demonstrated a fear response when the sound was played. Essentially, the scientists had erased a specific memory."

No, they did not do any such thing. The link Purtill has given is to the 2012 paper "Selective Erasure of a Fear Memory," a bad example of low-quality neuroscience research, guilty of multiple forms of Questionable Research Practices. Reliable neuroscience research requires study group sizes of at least 15 or 20 subjects per study group, but that standard was not met in this research. 

In Figure 1 of the paper we see that a larger-than-average sample size was used for two groups (17 and 24), but that a way-too-small sample size of only 4 was used for the corresponding control group. For reliable rodent neuroscience research, you need a sufficiently high number of animals in all study groups, including the control group. The same figure tells us that in another experiment the number of animals in the study group was only 5 or 6, which is way too small. Figure 3 tells us that in other experiments only 8 or 9 mice were used, and Figure 4 tells us that in other experiments only 5 or 6 mice were used. So this paper is guilty of using way-too-small study group sizes. No mention is made in the paper of any blinding protocol, a necessity for a study of this type to be taken seriously as reliable research. The paper relies heavily on judgments of fear in rodents, made by the utterly unreliable technique of judging "freezing behavior." All rodent research studies using this unreliable technique are examples of junk science, for reasons explained at length in my post here.
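
To see how little statistical power such group sizes provide, consider a minimal sketch using the statsmodels Python library. The effect size used here (Cohen's d = 0.8, conventionally a "large" effect) is my assumption, chosen generously in the paper's favor, since the paper offers no effect-size justification of its own:

```python
# Rough power check for a two-sample t-test (statsmodels).
# The effect size (Cohen's d = 0.8, "large") is an assumption made
# in the paper's favor; the paper itself offers no such justification.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Unbalanced design like Figure 1: 4 control mice vs. 17 experimental mice.
power_small = analysis.power(effect_size=0.8, nobs1=4, ratio=17 / 4,
                             alpha=0.05)

# A balanced design with 20 animals per group, the kind of minimum
# argued for in this post.
power_adequate = analysis.power(effect_size=0.8, nobs1=20, ratio=1.0,
                                alpha=0.05)

print(f"power with 4 vs. 17 mice:  {power_small:.2f}")
print(f"power with 20 vs. 20 mice: {power_adequate:.2f}")
```

Even granting a large effect, the 4-animal control group leaves the power far below the conventional 0.8 threshold, which means any "significant" result from such a design has a greatly elevated chance of being a false alarm.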

Next Purtill gives us this account:

"For their experiment, the pair identified brain cells in a mouse hippocampus that activated when the animal received a startling shock. Then they took the mouse out of the enclosure where the shock occurred and placed it in a new box with no sights or other sensory cues associated with the memory of its old environment. Next, using millisecond-long pulses of light, they activated those same brain cells — without the physical shock of the earlier stimulus.

The mouse acted exactly as it had when the shock happened, even though no shock occurred. You can’t interview a mouse about its memories. Researchers base their conclusions on the animal’s behavior. And in this case, it appeared that they’d turned a memory on."

The link is to the paper “Optogenetic stimulation of a hippocampal engram activates fear memory recall,” which is another low-quality paper guilty of multiple types of Questionable Research Practices. We see in Figure 3 of that paper that inadequate sample sizes were used. The numbers of animals listed in that figure (during different parts of the experiments) are 12, 12, 12, 5, and 6, for an average of 9.4. That is not anything like what would be needed for a moderately convincing result, which would be a minimum of 15 or 20 animals per study group. The experiment relied crucially on judgments of fear produced by manual assessments of freezing behavior, which were not corroborated by any other technique such as heart-rate measurement. Such attempted judgments of "freezing behavior" are not a reliable method for judging whether a rodent is afraid or whether a rodent remembers anything. The study does not describe in detail any effective blinding protocol. The study involved stimulating certain cells in the brains of mice, using something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory is being recalled when the stimulation occurs. We have no robust evidence that any such thing as an activation of fear memory recall has occurred. 

It is laughable to hear Purtill's narrative about "a mouse" here, as if a reported response of "a mouse" were good evidence. As a general rule, experiments with rodents don't count for anything unless an effect is shown with at least 15 or 20 subjects per study group, and almost always even 15 subjects is too small a study group. 

[Image: sample size and effect size]

We then have a quote from Sheena Josselyn claiming that "when you can do those sorts of things to memories, you know you have found the neural basis of a memory." Josselyn is not to be trusted on this matter. If you look up Josselyn's papers on Google Scholar, you will fail to find any original research by her or anyone else that provides robust evidence to support her claims about memories being found in rodent brains. What you will be most likely to find are bad examples of low-quality research guilty of Questionable Research Practices, such as way-too-small study group sizes and unreliable techniques for judging whether a mouse recalled anything, typically the utterly unreliable method of trying to judge whether "freezing behavior" occurred. An example is her low-quality 2024 paper here, which used way-too-small study group sizes such as only 8 or 9 rodents, and also relied on judgments of "freezing behavior" to assess recall. Another paper by her (the 2023 paper here) has the same defects, and is of just as poor quality. 

We then have in the LA Times article by Purtill another grand account of neuroscientist work, one that ends with the claim that "the scientists had created a false memory, another seminal feat." The claim is as untrue as the previous narratives of groundless scientist boasts. The link is to this paper by the MIT memory lab, with the grandiose title “Creating a False Memory in the Hippocampus.” When we look at Figure 2 and Figure 3, we see that the sample sizes used were paltry: the different groups had only about 8 or 9 mice per group. Such paltry sample sizes do not yield any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. No convincing evidence has been provided of creating a false memory. Once again, we have a study that is completely dependent upon an unreliable technique for attempting to measure memory in rodents: the technique of trying to judge "freezing behavior." All neuroscience studies relying on that technique are junk science, for reasons explained at length in my post here.

Finally, Purtill tells us another narrative glorifying Ramirez, citing as her source the paper "Activating positive memory engrams suppresses depression-related behaviour." It's another example of low-quality research work from Ramirez and his co-authors. The study group sizes used are mostly much smaller than 15, and no experimental neuroscience paper using fewer than 15 subjects per study group should be taken seriously. We read of way-too-small study group sizes such as only 3, only 5, only 12, and only 9. You may realize how low-quality this research is when you consider that it is research involving mice. Any claim to have suppressed "depression-related behaviour" in mice is laughable, because of the difficulty of reliably verifying or gauging sadness in mice. You can reliably detect fear in mice by measuring heart rates, which spike very dramatically when mice are afraid. Rather than using this very reliable technique for measuring fear in mice, neuroscientists senselessly prefer the unreliable technique of trying to judge "freezing behavior," probably because unreliable measurement techniques increase the chance of false alarms that can be leveraged to help get "publishable" results. 

Writing in this article like a pushover cheerleader science journalist fawning over a boastful scientist, and believing his unfounded claims "hook, line and sinker," Purtill has apparently failed to apply any critical scrutiny to the low-quality research she is citing. Instead of producing this type of misleading puff piece, she should study the characteristics that distinguish high-quality neuroscience research from low-quality research guilty of Questionable Research Practices. One of those characteristics is the presence of a sample size calculation, in which a researcher calculates in advance that the study group sizes he used were adequate to provide decent statistical power. None of the papers that Purtill has referenced contains any sample size calculation. That's a big "red flag" that Purtill ignored. 
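
For the curious, here is a minimal sketch of what such a sample size calculation looks like, again using the statsmodels Python library. The effect sizes are illustrative assumptions (Cohen's conventional "medium" and "large" values), not figures taken from any of the cited papers:

```python
# A minimal example of the sample size calculation the cited papers omit:
# how many animals per group a two-sample t-test needs to reach the
# conventional 80% power at a 5% significance level.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.5, 0.8):  # assumed "medium" and "large" effect sizes
    n = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"Cohen's d = {d}: about {n:.0f} animals per group needed")
```

Under these assumptions, the calculation calls for roughly 26 animals per group for a large effect and roughly 64 per group for a medium one, which is far beyond the 3 to 12 animals typical of the papers discussed above.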

[Image: red flags of bad science research]

"
Articles published during the past decade bemoaning the inability of mainstream neuroscience to generate replicable or even reproducible outcomes are too many to count.......If we weren't living it, it would be hard to imagine how a research culture could have strayed so far from the path of rationality as has the culture of neuroscience. Fundamental problems in theory and method have long been flagged (e.g. Teller, 1984; Jonas & Kording, 2018; Brette, 2019), but critiques have left barely a trace on the hard-beaten track of routine, mainstream practice" -- A vision scientist, 2023 (link). 

Appendix: Let us look at the "Freezing Percentage" graph typically shown in studies of this type. The graph will typically look like this:

[Image: a typical "Freezing Percentage" bar graph]

What is being graphed here? The graphs are made after observing mice during some arbitrary short time period such as 30 seconds or 60 seconds or 90 seconds or 2 minutes or 3 minutes, and simply trying to record what percentage of the time the mouse was immobile. The label "freezing behavior" is a loaded, subjective term. You cannot tell from mere non-movement whether a mouse was afraid or whether a mouse recalled anything. 
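
To make concrete what such a graph encodes, here is a minimal sketch, with entirely made-up data, of how a "freezing percentage" is computed from a second-by-second immobility record, and how the resulting number swings with the arbitrary choice of observation window:

```python
import random

# Hypothetical second-by-second immobility record for one mouse over
# 3 minutes: True = immobile during that second, False = moving.
# This is made-up data for illustration only.
random.seed(1)
immobile = [random.random() < 0.4 for _ in range(180)]

def freezing_percentage(trace, window_seconds):
    """Percent of the first window_seconds of the trace spent immobile."""
    window = trace[:window_seconds]
    return 100.0 * sum(window) / len(window)

# The same mouse and the same session yield a different "freezing %"
# for each arbitrary choice of observation window.
for seconds in (30, 60, 90, 120, 180):
    pct = freezing_percentage(immobile, seconds)
    print(f"{seconds:3d}-second window: {pct:.0f}% freezing")
```

Nothing in the typical published chart tells the reader which of these windows was used, which is exactly the problem discussed below.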

Contrary to minimal standards of good scientific research, such graphs typically fail to mention the length of time over which this non-movement occurred, as if the researcher were trying to hide something. If graphs of this type were decently done, they would be labeled like this:

[Image: a freezing-percentage graph with the observation interval labeled]

Can we tell anything about whether a mouse was afraid, or whether it remembered better or worse, from the percentage of time that the mouse stays immobile during a short time period such as 90 seconds? We cannot. The assumption that such graphs tell us anything about mouse recall or mouse fear is a silly, erroneous assumption. Living for many years in a New York apartment in which I would many times see a mouse and shriek loudly, I never once saw a mouse "freeze in fear" upon experiencing this fearful stimulus. Inevitably the mouse just ran away. You cannot tell anything about whether a mouse remembered or was afraid from the percentage of time the mouse is immobile during some time interval. 

When "freezing behavior" judgments are made, there are no standards in regard to how long a length of time an animal should be observed when recording a "freezing percentage"  (a percentage of time the animal was immobile). An experimenter can choose any length of time between 30 seconds and five minutes or more (even though it is senseless to assume rodents might "freeze in fear" for as long as a minute).  Neuroscience experiments typically fail to pre-register experimental methods, leaving experimenters to make analysis choices "on the fly." So you can imagine how things work. An experimenter might judge how much movement occurred during five minutes or ten minutes after a rodent was exposed to a fear stimulus. If a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 30 seconds, then 30 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise,  if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 60 seconds, then 60 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise,  if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over two minutes, then two minutes would be chosen as the interval to be used for a "freezing percentage" graph. And so on and so forth, up until five minutes or ten minutes. If the researcher still has no "more freezing" effect he can report, the researcher can always do something like report on only the last minute of a larger time length, or the last two minutes, or the last three minutes, or the last four minutes. Because there are 10 or 12 different ways in which the data can be analyzed, each with about a 50% chance of success, the likelihood of the researcher being able to report some "higher freezing level" is almost certain, even if the tested interventions or manipulations had no real effect on memory. Such shenanigans drastically depart from good, honest, reliable experimental methods, and any researcher engaging in such shenanigans should be ashamed of himself. 

To give at least a little reassurance that such shenanigans are not occurring in the worst way, every scientific paper providing a "freezing percentage" graph should at a minimum tell us the time interval over which the estimation of "freezing behavior" was made. Astonishingly, most papers providing such "freezing behavior" charts fail to even specify the time interval corresponding to such charts. So we get again and again charts claiming that some percentage of "freezing behavior" occurred over some time interval, but we are usually not even told what the time interval was. This is experimental science at its clumsiest and most dysfunctional. Of course, by failing to specify the time interval used, a researcher makes it easier to hide his malfeasance if he arbitrarily uses different time intervals in different places in his analysis, in order to gin up more convincing "freezing behavior" charts, or if he uses some arbitrary time interval (chosen to yield more pleasing results) different from the interval most commonly used when such judgments are made. And the more researchers fail to specify the time intervals they used, the harder it is to tell that they are not following any research standard, but are simply analyzing with whatever time interval produces the more convincing freezing-behavior charts. 

"Freezing behavior" charts are a sign of junk science. Every paper relying on such charts should be dismissed as junk science.  There are reliable ways to measure whether a mouse recalled or feared something. One way is to measure heart rate, which very reliably increases dramatically when mice are afraid. Another reliable way to measure recall or fear in mouse is to use a fear stimulus avoidance method, illustrated in the chart below. 

[Image: a good technique for measuring recall in mice]

For more than 12 years, it has been very easy for almost anyone to create hour-long videos and upload them to www.youtube.com. It would be very easy for any scientist claiming "different percentages of freezing behavior" in two groups (an experimental group and a control group) to document such a claim by creating an hour-long video and uploading it to YouTube, so that anyone could check by looking at a YouTube link provided in the paper. Such a video would simply show how each mouse in the experimental group responded during some two-minute or 90-second period matching the displayed "freezing behavior" chart, and likewise how each mouse in the control group responded. Although such videos would be very easy to make and upload, we never see neuroscience papers providing such links. This is probably because those producing papers with "Freezing %" charts do not want independent observers to be able to check on their work in making such charts, which would not hold up well to scrutiny. Claims in "big boast" science papers of greater or smaller "freezing behavior" should never be trusted unless such a YouTube link (or an equivalent link) is provided. Similarly, you should never trust any person today who claims the ability to levitate, if the person does not even provide a video showing this claimed ability.  
