Thursday, January 23, 2025

Folly of the "Train Them Then Dissect Them" Neuroscientists

"Train Them Then Dissect Them" is a phrase we can use to describe a particular type of animal experiment often done by neuroscientists. The experiment will work like this:

(1) Some animals (typically mice) will be trained to learn something. For example, they may be trained with the Morris Water Maze test to be able to go to a submerged platform within a water-filled tank after they are placed in such a tank. Or they may be trained over several days to keep their balance on a rotating rod, using something called a rotarod. 

(2) The animals will then be killed, and their bodies dissected, with the brain cut up into slices that can be microscopically examined. 

(3) The experimenters will look for some tiny area in the brain where they can claim to see some difference between the brains of the trained animals and a control group of animals who were not trained.  All kinds of things may be checked for.

(4) The paper will make some announcement that some tiny difference was found between the trained animals and the animals in the control group. The reported difference might be any of 1,000 different things, such as the size of dendritic spines in some tiny spot, the number of dendritic spines in some other part, the length of synapses in one spot, or the thickness of synapses in another. The paper will claim that evidence has been found of "learning-induced remodeling" or "learning-induced modification" of the brain. Neuroscientists will boast that they have found evidence of a brain storage of memories. 

There are several reasons why experiments of this type are typically egregious examples of junk science. One reason is that when microscopically examining the dissected brains of two randomly chosen animals of the same species, you can always find hundreds of tiny little differences. So merely by showing that there is some brain difference, you do nothing at all to show that such a difference arose from the training or learning that occurred in the mice. The same difference might have existed before the learning or training occurred. 

An example of a very poor-quality paper following this "train them then dissect them" technique is the paper "Learning-induced remodeling of inhibitory synapses in the motor cortex." Glaring defects in the paper include these:

(1) The authors failed to use adequate sample sizes, with study group sizes such as only 7 mice or only 4 mice. 

(2) The paper makes no mention of using any kind of blinding protocol, something essential for a paper of this type to be taken seriously. Neither the word "blind" nor the word "blinding" appears in the paper. The tiny differences reported in structures can easily be explained as the product of biased ratings or biased size estimations made by non-blinded analysts who knew which mice were trained and which were not, and who had a motivation to estimate in a particular way so that statistical significance could be reported. The tiny, blurry, barely visible, indistinct, hard-to-measure things being judged for size are exactly the type of things where the bias of motivated, non-blinded analysts could be a big factor. 

(3) There was no pre-registration of the study committing the authors to make a small number of checks for a difference of only one specific type in only one or a few exact spots. The authors were apparently free to keep checking in a hundred different ways, until some tiny difference was found somewhere. 

(4) When the trained mice were compared to untrained mice, the control group of untrained mice was far too small, consisting of only 4 mice (Figure 2B). Fifteen subjects per study group (including 15 controls) is the minimum for any study like this to be taken seriously, and the required number is probably greater. No mention is made of a sample size calculation, which would have revealed how inadequate the study group sizes were. 

(5) We have graphs supposedly showing some tiny difference found somewhere, but at a glance you won't even notice any difference. 

Even if the paper had shown a much larger difference, it would not prove anything, because anyone microscopically examining the brains of two randomly selected animals will always be able to find little differences here and there. It is never clear or probable that such differences occurred because one set of mice got training that the others did not. No good evidence of brain-stored memories is ever produced by such studies. The people who do such junk-science experiments are needlessly killing mice. 
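The sample-size complaint above can be made concrete with a standard power calculation. The sketch below uses only the Python standard library (the normal approximation to a two-sided, two-sample test); the effect sizes, alpha, and power level are illustrative textbook conventions, not values from the paper:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-sample comparison,
    using the normal approximation to the two-sided t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # critical value for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a "large" standardized effect (d = 0.8) needs about 25 mice per group;
# a medium effect (d = 0.5) needs about 63 per group.
print(n_per_group(0.8))  # 25
print(n_per_group(0.5))  # 63
```

By this conventional calculation, groups of 4 or 7 mice could only reliably detect implausibly enormous effects.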


I can give a description of how an experiment of this type could be done so that it would meet at least some of the requirements of robust research. 

(1) There would be adequate study group sizes, such as maybe 30 mice in the group to be trained, and 30 mice in the control group. 

(2) The study would be pre-registered, so that there would be a commitment to gathering data and analyzing data in a specific, limited way. For example, the specification of the pre-registration document might state that exactly 25 microscope-readable slices would be taken from the same region of each mouse, such as the hippocampus or the motor cortex, and that the study would only analyze one parameter, such as the quantity of dendritic spines.   

(3) Each slice would be put in an envelope that had a subject number, and an indication of whether the mouse had been trained or not. 

(4) A simple computer program would be written with two functions: (a) generate a 7-digit random number and store in a text file a supplied subject number, a "trained" indicator of either Y or N, and that generated 7-digit number; (b) retrieve the subject number and its "trained" indicator when supplied with the 7-digit random number. This is an elementary programming task.

(5) The program would be used to generate random numbers that would be written on envelopes.  Each envelope containing a slice of brain tissue and a subject number would be replaced with an envelope containing one of the random numbers generated by the program.

(6) You would now have a set of envelopes marked only with random 7-digit numbers that a human could not recognize. Such a set of envelopes could be shuffled, and given to microscope analysts. Such analysts would thereby be completely blind to whether the tissue slices belonged to mice that had been trained or mice that had not been trained. There would be no chance of some bias effect in which an analyst tended to analyze trained mice differently. The microscope-using analysts would look for differences in tissue, using only the limited hypothesis to be tested that was stated in the pre-registration document.  So, for example, if that document said that only the thickness of synapses would be analyzed, then only that one thing would be analyzed. 

(7) After the microscopic analysis had been completed, and an analysis report form put in each of the envelopes, the computer program could be used to retrieve the original subject numbers and training status corresponding to each envelope. So, for example, someone holding an envelope with a random number of 4477353 might type in that number to the computer program, and get a reply of "Subject #21, Trained" or "Subject #35, Not Trained," with the answer retrieved from the text file previously made by the program.  The answer could be written on each envelope. 

(8) Then the data could be tallied up to see whether there was any difference between the characteristics of the trained mice and the untrained mice.

(9) Since the experiment would strictly adhere to the protocol of the original pre-registration design document, there would be no chance that the final analysis would include fewer or more brain slices than specified in that document. 
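The coding program described in steps (4) through (7) above can be sketched in a few lines of Python. The file name and record format here are illustrative assumptions, not a prescribed standard:

```python
import random

CODEBOOK = "codebook.txt"  # hypothetical tab-separated file: code, subject, Y/N

def assign_code(subject_id, trained, path=CODEBOOK):
    """Generate a 7-digit blinding code and record (code, subject, trained).
    A production version would also check that the code is not already in use."""
    code = f"{random.randint(0, 9_999_999):07d}"
    with open(path, "a") as f:
        f.write(f"{code}\t{subject_id}\t{'Y' if trained else 'N'}\n")
    return code

def look_up(code, path=CODEBOOK):
    """After analysis is complete, unblind: return (subject_id, trained)."""
    with open(path) as f:
        for line in f:
            c, subject, flag = line.rstrip("\n").split("\t")
            if c == code:
                return subject, flag == "Y"
    raise KeyError(code)
```

For example, `assign_code("Subject #21", True)` might return `"4477353"`, and after the microscope work is finished, `look_up("4477353")` would return `("Subject #21", True)`.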

That would be a decent design for an experiment of this type. The vast majority of these "train them then dissect them" experiments follow no such design. Typically such experiments make no use of blinding at all. Any difference in the reported characteristics can be explained by either pure chance variation or the bias of a microscopic data analyst motivated to report some difference. 

A paper such as "Learning-induced remodeling of inhibitory synapses in the motor cortex" tries to suggest that learning of a motor skill is stored as changes in dendritic spines. There is a reason why such a hypothesis makes no sense. Dendritic spines are very unstable things. 


 The 2015 paper "Impermanence of dendritic spines in live adult CA1 hippocampus" states the following, describing a 100% turnover of dendritic spines within six weeks:

"Mathematical modeling revealed that the data best matched kinetic models with a single population of spines of mean lifetime ~1–2 weeks. This implies ~100% turnover in ~2–3 times this interval, a near full erasure of the synaptic connectivity pattern."

The paper here states, "It has been shown that in the hippocampus in vivo, within a month the rate of spine turnover approaches 100% (Attardo et al., 2015; Pfeiffer et al., 2018)." The 2020 paper here states, "Only a tiny fraction of new spines (0.04% of total spines) survive the first few weeks in synaptic circuits and are stably maintained later in life." The author here is telling us that only 1 in 2,500 dendritic spines survives more than a few weeks. Given such an assertion, we should be very skeptical about the author's insinuation that some very tiny fraction of such spines "are stably maintained." No one has ever observed a dendritic spine lasting for years, and the observations that have been made give us every reason to assume that dendritic spines never last for more than a few years. Meanwhile, human knowledge and human motor skills can last for 50 years or more, far too long a time to be explained by changes in dendritic spines or synapses, both of which change too much and too frequently to be a stable storage place for human memories. 

The failure of neuroscientists to listen to what dendritic spines are telling us is epitomized by a 2015 review article on dendritic spines, which states, "It is also known that thick spines may persist for a months [sic], while thin spines are very transient, which indicate that perhaps thick spines are more responsible for development and maintenance of long-term memory." It is as if the writers had forgotten that humans can retain memories for 50 years, a length of time hundreds of times longer than "months." 

A 2019 paper documents a 16-day examination of synapses, finding that "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That is about a 33% disappearance rate over the course of 16 days. The same paper refers to another paper that "reported rates of [dendritic] spine eliminations in the order of 40% over an observation period of 4 days." A paper studying the lifetimes of dendritic spines in the cortex states, "Under our experimental conditions, most spines that appear survive for at most a few days. Spines that appear and persist are rare." 

The 2023 paper here gives a graph showing the decay rate of the volume of dendritic spines. [Graph not reproduced.] It is obvious from the graph that spines do not last for years, and mostly do not even last for six months. 
Page 278 of the same paper says, "Two-photon imaging in the Gan and Svoboda labs revealed that spines can be stable over extended periods of time in vivo but also display genesis (generation) and elimination (pruning) at a frequency of 1–4% per week." A population losing members at 2% per week would lose roughly two-thirds of them within a year, and nearly 90% within two years. Discussing the motor cortex, the paper here says, "We found that 3.5% ± 1.2% of spines were eliminated and 4.3% ± 1.3% were formed in motor cortex over 2 weeks (Figures 3J, 3K, and 3O; 224 spines, 2 animals)." An elimination rate of 3.5% per two weeks, compounded over a year, would eliminate about 60% of spines. 

The 2022 paper "Stability and dynamics of dendritic spines in macaque prefrontal cortex" studied how long dendritic spines last in a type of monkey. It says, "We found that newly formed spines were more susceptible to elimination, with only 40% persisting over a period of months." The same study found that "the percentage of elimination for pre-existing spines over 7 days was only 6% on average," a rate that, compounded, would eliminate roughly 96% of pre-existing dendritic spines within a year. Dealing with another type of monkey, the 2015 paper "In Vivo Two-Photon Imaging of Dendritic Spines in Marmoset Neocortex" tells us that "The loss or gain rate at the 1 d [one day] interval observed in this study was similar to those in previous studies of layer 5 neurons of the somatosensory cortex of transgenic mice (12% in 3 d [3 days] for both loss and gain; Kim and Nabekura, 2011) and layer 2/3 neurons of ferret V1 by the virus vector method (4% in 1 d [1 day] for both loss and gain; Yu et al., 2011)." Reported loss rates like these imply near-total loss of dendritic spines within a year. 
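Extrapolations like these can be checked by compounding the per-interval loss rate. A quick sketch using rates quoted above, assuming a constant elimination rate applied to the surviving population (and ignoring newly formed spines, since the question is how long an original spine persists):

```python
def surviving_fraction(loss_per_interval, interval_days, total_days):
    """Fraction of an initial spine population still present after
    total_days, given a constant per-interval elimination rate (compounded)."""
    n_intervals = total_days / interval_days
    return (1 - loss_per_interval) ** n_intervals

# 2% per week over one year: about 35% of original spines survive
print(surviving_fraction(0.02, 7, 365))   # ~0.35
# 6% per 7 days (macaque pre-existing spines) over one year: ~4% survive
print(surviving_fraction(0.06, 7, 365))   # ~0.04
# 12% per 3 days over one year: essentially none survive
print(surviving_fraction(0.12, 3, 365))
```

Under any of these quoted rates, essentially no original spine population remains after a few years, far short of the decades over which human memories persist.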

Most synapses are attached to dendritic spines, so all of these findings about the instability and short lifetimes of dendritic spines are also findings about the instability and short lifetimes of synapses. Both synapses and dendritic spines are way, way too unstable to be a credible storage place for human memories that can last for decades. There is no place in the brain that can be reasonably postulated as a storage place allowing memories to persist for decades. 

