Thursday, January 30, 2025

This Neuroscientist Trying to Explain Minds Sounds Empty-Handed

When neuroscientists are interviewed, we never seem to get interviewers asking the kind of questions that should be asked of people who claim to know very grand things without having observations or reasoning to back up such boasts. Why is it that people interviewing neuroscientists always seem to pitch only the softest of softball questions?

An example of such an interview is the recent Quanta Magazine interview of neuroscientist Anil Seth. Seth is promoting a book, one apparently claiming to offer some explanation of minds. The book has the pretentious title Being You: A New Science of Consciousness. But in the interview Seth does not sound like anyone who has any credible idea of how to explain human minds. What we get is the most vacuous hand-waving, combined with circuitous, not-very-relevant digressions. Seth had a chance here to persuade us that he has something like a substantial idea of how to explain minds. He completely fails to do so, leaving us with the impression of someone empty-handed when it comes to mind explanation.

Early in the article, Seth makes a groundless triumphal boast that he does nothing to substantiate. He says this:

“I always get a little annoyed when I read people saying things like, ‘Chalmers proposed the hard problem 25 years ago’ … and then saying, 25 years later, that ‘we’ve learned nothing about this; we’re still completely in the dark, we’ve made no progress,’” said Seth. “All this is nonsense. We’ve made a huge amount of progress.”

No, neuroscientists have not done anything to credibly explain human minds; and Seth gives us no examples of the progress he claims was made.

We then have Seth and his interviewer engaging in the vapid word trick of consciousness shadow-speaking. The trick involves acting as if there is a mere "problem of consciousness" rather than an almost infinitely greater problem of explaining human minds. A human being is not merely "some consciousness." A human being is an enormously complex reality, and the mental reality is as complex as the physical reality. You dehumanize and degrade human beings when you refer to their minds as mere "consciousness." The problem of human mentality is the problem of credibly explaining the thirty or forty most interesting types of human mental experiences, human mental characteristics and human mental capabilities. Instead of just being "some consciousness," human minds have a vast variety of mental capabilities and mental experiences such as these:

  • imagination
  • self-hood
  • abstract idea creation
  • appreciation
  • memory formation
  • moral thinking and moral behavior
  • instantaneous memory recall
  • instantaneous creation of permanent new memories
  • memory persistence for as long as 50 years or more
  • emotions
  • desire
  • speaking in a language
  • understanding spoken language
  • creativity
  • insight
  • beliefs
  • pleasure
  • pain
  • reading ability
  • writing ability
  • mental illness of many different types
  • ordinary awareness of surroundings
  • visual perception
  • recognition
  • planning ability
  • auditory perception
  • attention
  • fascination and interest
  • the correct recall of large bodies of sequential information (such as when someone playing Hamlet recalls all his lines correctly)
  • eyes-closed visualization
  • extrasensory perception (ESP)
  • dreaming
  • pattern recognition
  • social abilities
  • spirituality
  • philosophical reasoning
  • mathematical ability
  • volition
  • trance phenomena
  • exceptional memory such as hyperthymesia
  • extraordinary calculation abilities such as in autistic savants
  • out-of-body experiences
  • apparition sightings 

It is always a stupid trick when someone reduces so complex a reality to the faintest shadow of what it is, by speaking as if there is a mere "problem of consciousness," and talking as if humans are just "some consciousness" that needs to be explained. Such a trick (which can be called consciousness shadow-speaking) is as silly as ignoring the vast complexity of the organization of the human body, and speaking as if explaining the origin of human bodies is just a task of explaining how there might occur "some carbon concentrations."

After some dehumanizing nonsense in which Seth describes humans as machines or animals, Seth offers this utterly vacuous attempt at mind explanation:

"I eventually get to the point that consciousness is not there in spite of our nature as flesh-and-blood machines, as Descartes might have said; rather, it’s because of this nature. It is because we are flesh-and-blood living machines that our experiences of the world and of 'self' arise."

Nothing of any substance is said here, and we have three clues that the speaker has gone way, way wrong. The first is the dehumanizing nonsense of referring to humans as machines. The second is the denialist nonsense of referring to the human self in quotation marks, as if Seth does not believe such a thing exists. Then there's the ridiculous idea that describing a human as a machine could somehow shed light on why humans are conscious. Machines are not conscious. 

Humans are not animals, and the habit of calling humans animals is a senseless and morally harmful speech custom of biologists, who should have classified the human species in a separate human kingdom rather than classifying the human species in an animal kingdom. Humans are not machines, and human bodies are not machines. One of the many differences between a machine and a human body is that machines do not continually replace their constituent components, but human bodies and human brains continually replace their protein parts. I can understand why Seth would be inclined to forget about human brains constantly replacing their protein parts, because it is one of many facts that discredit the claims of people such as Seth that the brain is the storage place of memories. The average lifetime of brain proteins is roughly 1000 times shorter than the maximum length of time that humans can remember things (more than 50 years). You wouldn't remember anything for more than a few weeks if your memories were stored in your brain.
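The "roughly 1000 times shorter" figure is simple arithmetic. A quick back-of-envelope check, with the average brain-protein lifetime set to an illustrative "few weeks" value (the exact lifetime varies by protein):

```python
# Back-of-envelope check: how many times longer is a 50-year memory
# than the average lifetime of a brain protein?
WEEKS_PER_YEAR = 52
memory_span_weeks = 50 * WEEKS_PER_YEAR   # 50-year memories = 2600 weeks
protein_lifetime_weeks = 2.6              # illustrative "few weeks" assumption
ratio = memory_span_weeks / protein_lifetime_weeks
print(round(ratio))  # → 1000
```

Any plausible value in the "few weeks or less" range yields a ratio on the order of a thousand.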

Next in the Quanta Magazine interview we have a mention of IIT theory (integrated information theory), and Seth says that he finds "bits of IIT promising, but not others." He says, "The parts of IIT that I find less promising are where it claims that integrated information actually is consciousness." Seth wastes several paragraphs of his interview talking about an IIT theory that he does not endorse, so that part of the interview does nothing to suggest that he has any explanation for human minds.

We then have Seth talking for three paragraphs about a "free energy principle." Filled with circumlocutions, the discussion sounds like irrelevant hand-waving, bearing no evident relevance to an explanation of minds. Seth then proceeds to state a false-as-false-can-be misstatement that makes him sound like a poor scholar of biology. He says, "Historically, there is a commonality between the apparent mysteries of 'life' and of 'heat,' which is that both eventually went away — but they went away in different ways."

Nothing could be more false than the claim that the mysteries of life "went away." The more scientists have learned about the vast levels of organization and the stratospheric heights of functional complexity and information-rich interdependence in living things, the greater the mysteries of life have grown. Scientists have no credible explanation for the origin of life.  Scientists have no credible explanation for the origin of the protein molecules or the cells in human bodies. Scientists cannot even explain how a human cell is able to reproduce. The progression from a speck-sized zygote to the state of vast hierarchical organization that is the human body is a miracle of organization a thousand miles over the heads of scientists. Claims that such a progression occurs by a reading of a body blueprint in DNA are false -- DNA does not have any specification of how to build a human body or any of its organs or any of its cells.  

When we come to human minds, the mysteries of life are endless. Neuroscientists lack any credible explanation for learning, the lifelong preservation of memories, the ability of humans to instantly recall, and the ability of humans to think, imagine and believe.  When neuroscientists try to give explanations for such things, they give us sound bites that are the vaguest hand-waving, sound bites such as "synapse strengthening" which cannot be the correct explanation, given how humans can form memories instantly (much faster than synapses can strengthen), and given how unstable synapses are (consisting of proteins with an average lifetime of only a few weeks or less).  

Seth then proceeds to make this false-as-false-can-be claim: "The problem of life wasn’t solved; it was 'dissolved.'" No such thing occurred. There are still innumerable gigantic unsolved and unanswered problems of biology. The more we have learned about the facts of biology and the sky-high levels of organization, fine-tuning and information-rich functional complexity everywhere in human bodies, the bigger the explanatory problems have grown.

Seth makes a very misguided attempt to insinuate that the problem of explaining minds may be solved the way the problem of heat was solved. Nineteenth-century scientists did not understand heat, and thought that maybe heat was some substance (called caloric) which existed more abundantly the hotter things were. Later this idea was discarded, and scientists came to understand that heat is just the random motion of molecules in an object, with temperature measuring how fast those molecules are moving on average. But there is no reason to think that such a thing has any relevance to explaining minds. Heat was always a very simple property, something that can be expressed by a single number (a temperature). But a mind is not a property. A mind is a reality of very great diverse complexity, a wealth of very different and diverse capabilities and vastly varying experiences, something vastly too complex and diverse to ever be expressed by a single number. And heat is a physical thing, but a mind is not a physical thing.

There's another reason why it is senseless for Seth to try to insinuate that the explanation of heat has some relevance to an explanation of minds. Heat was explained by an account of molecular motion, the idea that heat is really how fast molecules are moving around in a substance. But the neurons which Seth thinks are an explanation for minds are not things in motion. Neurons are anchored in place by their dendrites and axons, in a way that makes each neuron as immobile as a tree.

Then we have this little bit of hand-waving vacuity from Seth:

"Here, it’s going to be a case of saying consciousness is not this one big, scary mystery, for which we need to find a humdinger eureka moment of a solution. Rather, being conscious, much like being alive, has many different properties that will express in different ways among different people, among different species, among different systems. And by accounting for each of these properties in terms of things happening in brains and bodies, the mystery of consciousness may dissolve too."

There's no explanation there, just a promissory note that has no credibility, combined with misleading language about properties. Brains have been analyzed to death, and brain tissue has been exhaustively examined with the most powerful microscopes capable of photographing the tiniest brain structures. But no one has found in brains anything that can explain the capabilities of minds. There's no reason to think that further analysis of brains will do anything to explain minds. Also, a human mind is not some mere set of properties, but something enormously more than that. The human mind is largely an enormously impressive set of capabilities and experiences. You can't explain capabilities and experiences by merely explaining properties. A property is some simple aspect of a physical thing that can be expressed with a single number, something like height, mass, width, depth or volume.  There is no possible set of property explanations that can add up to be an explanation of a human mind, or even a tenth of an explanation of a human mind. 

Earlier in the interview Seth made this gigantically misleading statement: "Life is a constellation concept — a cluster of related properties that come together in different ways in different organisms." No, physically speaking, a human body is something enormously more than some mere "cluster of related properties." A human body is a state of enormously organized matter, purposefully arranged in a manner unachievable by accident, to perform enormously complex and hard-to-achieve functions such as respiration, blood circulation, locomotion, visual perception and digestion. Depicting a human body as a "cluster of related properties" is as misleading as depicting an aircraft carrier as a "cluster of related properties."

The interviewer asks about Seth's goofy claim that perception is a "controlled hallucination" and Seth wanders in circles for four paragraphs without ever justifying this bizarre phrase.  The interview then ends with Seth addressing questions about whether machines will ever be conscious, but not in any way that persuades us that he has any explanation for human minds. 

The interview leaves us with the impression that neuroscientist Anil Seth is quite empty-handed in regard to explaining minds. Seth went round and round in circles, engaging in the most vacuous circumlocutions. Nothing that he said gives us any good reason for suspecting that he has any real understanding of how minds arise or how humans have the mental capabilities that they have. Seth's interview reminds me of what a dead end the brain explanation for human minds is.

You won't get anything any better if you read the prologue of Seth's new book, along with about 25 pages of the book, available on Google Books using the link here. In those pages, Seth sounds every bit as empty-handed a mind explainer as he does in the Quanta Magazine interview, and tells us nothing of any value in explaining human minds or human mental phenomena. He nonsensically promises on pages 7 to 8 that by the end of the book you'll understand that your consciousness is a "controlled hallucination." That's a very silly thing to say, given that hallucinations are sensory experiences of things that are not really there, while our normal conscious experience is of seeing things that are there.

It seems there is also basically "no there there" in Seth's 25-page paper "Being a beast machine: the somatic basis of selfhood." It seems like a vacuous empty-handed running-around-in-circles mess of doubletalk, gobbledygook and jargon. In the abstract Seth says "we describe embodied selfhood in terms of ‘instrumental interoceptive inference,' " an utterance that makes no sense given that 99% of the time you are a self, you are not making any inferences. Inference is a relatively rare activity of a human mind. Things are not at all clarified when Seth later tells us that "instrumental interoceptive (active) inference involves descending interoceptive predictions being transcribed into physiological homeostasis by engaging autonomic reflex arcs." The lack of a "somatic basis of selfhood" is shown by out-of-body experiences (experienced by a significant fraction of the population) in which human selves vividly observe their bodies from outside of their bodies, such as two meters away, something that would never occur if your body was the basis of your self. 

The paper's claim that "the brain embodies and deploys a generative model which encodes prior beliefs" is without any foundation in neuroscience observations. No one has ever found any evidence of models or beliefs stored in a brain, and  no one has any detailed credible theory of how a belief could ever be stored or encoded in a brain or maintained for decades in a brain that undergoes such very high molecular turnover and synaptic turnover. The observational neuroscience here is so nonexistent that while scientists have a word ("engram") for the unwarranted speculative idea of memories stored in brains, neuroscientists do not even have a word meaning a place where a brain stores a belief. 

Part of what's going on is that Seth seems to be using the illegitimate trick of describing a self that has sensory experiences, and then trying to suggest that such sensory experiences explain a self. The fallacy is that self-hood occurs just fine without such sensory experiences. People my age often lie awake in bed at night for an hour or more, in silent darkness, with only negligible sensory experiences. During such an hour my self-hood is still going full blast, as I think, remember, plan and engage in self-evaluation, insight and introspection. Sensory experiences are things that occur to selves, not any explanation for self-hood itself. Self-hood is something we should never expect to arise from brains. There is no reason why chemical or electrical activity from a set of billions of neurons connected by unreliably transmitting synapses would ever give rise to a unified sense of self, just as there is no reason why 10 million people named Smith scattered around the world would ever have a unified identity as a single Smith presence in the world.

In the Quanta Magazine interview Seth suggested that the topic of heat may have some relevance in explaining minds. There is actually a way that a discussion of heat and sunlight can give you a good hint about explaining minds, but it is a way completely different from anything Seth mentioned. The way I refer to was described in my 2020 post "Your Physical Structure Did Not Arise Bottom-Up, So Why Think Your Mind Did?" Here is how I used a discussion of heat and sunlight to try to point the reader in the right direction on the topic of explaining minds:

"To gain some insight on how we have been conditioned or brainwashed to favor a bad type of explanation for our physical structure and minds, let us consider a hypothetical planet rather different from our own: a planet in which the atmosphere is much thicker, and always filled with clouds that block the sun.  
Let's give a name to this perpetually cloudy planet in another solar system, and call this imaginary entity planet Evercloudy.  Let's imagine that the clouds are so thick on planet Evercloudy that its inhabitants have never seen their sun.  The scientists on this planet might ponder two basic questions:

(1) What causes daylight on planet Evercloudy?
(2) How is it that planet Evercloudy stays warm enough for life to exist?

Having no knowledge of their sun, the correct top-down explanation for these phenomena, the scientists on planet Evercloudy would probably come up with very wrong answers. They would probably speculate that daylight and planetary warmth are bottom-up effects.  They might spin all kinds of speculations such as hypothesizing that daylight comes from photon emissions of rocks and dirt, and that their planet was warm because of heat bubbling up from the hot center of their planet.  By issuing such unjustified speculations, such scientists would be like the scientists on our planet who wrongly think that life and mind can be explained as bottom-up effects bubbling up from molecules. 

Facts on planet Evercloudy would present very strong reasons for rejecting such attempts to explain daylight and warm temperatures on planet Evercloudy as bottom-up effects. For one thing, there would be the fact of nightfall, which could not easily be reconciled with any such explanations. Then there would be the fact that the dirt and rocks beneath the feet of the scientists of Evercloudy would be cold, not warm as would be true if such a bottom-up theory of daylight and planetary warmth were correct. But we can easily believe that the scientists on planet Evercloudy would just ignore such facts, just as scientists on our planet ignore a huge number of facts arguing against their claims of a bottom-up explanation for life and mind (facts such as the fact that people still think well when you remove half of their brains in hemispherectomy operations, the fact that the proteins in synapses have very short lifetimes, and the fact that the human body contains no blueprint or recipe for making a human, DNA being no such thing).

We can imagine someone trying to tell the truth to the scientists on planet Evercloudy:

Contrarian: 'You have got it very wrong. The daylight on our planet and the warmth on our planet are not at all bottom-up effects bubbling up from under our feet.  Daylight and warmth on our planet can only be top-down effects, coming from some mysterious unseen reality beyond the matter of our planet.'
Evercloudy scientist:  'Nonsense! A good scientist never postulates things beyond the clouds. Such metaphysical ideas are the realm of religion, not science. We can never observe what is beyond the clouds.'

Just as the phenomena of daylight and planetary warmth on planet Evercloudy could never credibly be explained as bottom-up effects, but could only be credibly explained as top-down effects coming from some mysterious reality unknown to the scientists of Evercloudy, the phenomena of life and mind on planet Earth can never be credibly explained as bottom-up effects coming from mere molecules, but may be credibly explained as top-down effects coming from some mysterious unknown reality that is the ultimate source of life and mind." 

Brains do basically nothing to explain minds and memory. But by very deeply studying the facts of neuroscience (not to be confused with the opinions of neuroscientists), particularly the many physical shortfalls of all brains, while simultaneously making a deep study of all human mental capabilities and every type of human mental experience (commonplace and spooky), you can gain some insights that can lead you in the right direction. If you combine such studies with a very deep study of why human bodies cannot be explained by mechanistic ideas, of how the biosphere is filled with endless wonders of fine-tuned organization and information-rich innovation that cannot be credibly explained as accidents of nature, and of cosmic fine-tuning, you will probably start taking some steps in the right direction toward better understanding your mind. This is all a business requiring much deep thought and much broad reading on a wide variety of topics, a great deal of effort. Lazy, shallow language tricks such as talking about mountain-sized problems effortlessly "dissolving" won't get you anywhere; nor will lazy deceits of shadow-speaking such as calling a human "a cluster of related properties" or just "some consciousness"; nor will silly slogans or stupid characterizations such as calling your day-to-day experience a hallucination or calling humans "beast machines."

Thursday, January 23, 2025

Folly of the "Train Them Then Dissect Them" Neuroscientists

"Train Them Then Dissect Them" is a phrase we can use to describe a particular type of animal experiment often done by neuroscientists. The experiment will work like this:

(1) Some animals (typically mice) will be trained to learn something. For example, they may be trained with the Morris Water Maze test to be able to go to a submerged platform within a water-filled tank after they are placed in such a tank. Or they may be trained over several days to keep their balance on a rotating rod, using something called a rotarod. 

(2) The animals will then be killed, and their bodies dissected, with the brain cut up into slices that can be microscopically examined. 

(3) The experimenters will look for some tiny area in the brain where they can claim to see some difference between the brains of the trained animals and a control group of animals who were not trained.  All kinds of things may be checked for.

(4) The paper will make some announcement that some tiny difference was found between the trained animals and the animals in the control group. The reported difference might be any of 1000 different things, such as the size of dendritic spines in some tiny spot, the number of dendritic spines in some other part, the length of synapses in some spot, or the thickness of synapses in some other part. The paper will claim that evidence has been found of "learning-induced remodeling" of the brain or "learning-induced modification of the brain." Neuroscientists will boast that they found evidence of a brain storage of memories.

There are several reasons why these types of experiments are typically very bad examples of junk science. One reason is that you can always find hundreds of tiny little differences when microscopically examining the dissected brains of two randomly chosen animals of the same species. So merely by showing that there is some brain difference, you do nothing at all to show that such a difference arose from the training or learning that occurred in the mice. The same difference might have existed before the learning or training occurred.

An example of a very poor-quality paper following this "train them then dissect them" technique is the paper "Learning-induced remodeling of inhibitory synapses in the motor cortex." Glaring defects in the paper include these:

(1) The authors failed to use adequate sample sizes, with study group sizes such as only 7 mice or only 4 mice. 

(2) The paper makes no mention of using any kind of blinding protocol, something essential for a paper of this type to be taken seriously. Neither the word "blind" nor the word "blinding" appears in the paper. The tiny differences reported in structures can easily be explained as being caused by biased ratings or biased size estimations made by non-blinded analysts who knew which mice were trained and which mice were not trained, and who had a motivation to estimate in a particular way, so that statistical significance could be reported. The very tiny, blurry, barely-visible, not-very-distinct, hard-to-measure things being judged for size are just the type of things where the bias of motivated, non-blinded analysts could be a big factor.

(3) There was no pre-registration of the study committing the authors to make a small number of checks for a difference of only one specific type in only one or a few exact spots. The authors were apparently free to keep checking in a hundred different ways, until some tiny difference was found somewhere. 

(4) When the trained mice were compared to untrained mice, the control group of untrained mice was way too small, consisting of only 4 mice (Figure 2B). 15 subjects per study group (including 15 controls) is the minimum for any study like this to be taken seriously, with the required number probably being greater. No mention is made of a sample size calculation, which would have revealed how inadequate the study group sizes were.

(5) We have graphs supposedly showing some tiny difference found somewhere, but from a quick look at the graphs you won't even notice any difference.
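A sample size calculation of the kind the paper omits is straightforward. Here is a minimal sketch using the standard normal-approximation formula for comparing two group means, with a two-sided alpha of 0.05 and power of 0.80; the effect sizes plugged in are illustrative assumptions, not figures from the paper:

```python
import math

# z-values for a two-sided alpha of 0.05 and power of 0.80
Z_ALPHA = 1.96
Z_BETA = 0.8416

def n_per_group(effect_size):
    """Normal-approximation sample size per group for comparing two
    means: n = 2 * ((z_alpha + z_beta) / d) ** 2, rounded up."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2)

# Even for a conventionally "large" standardized effect (d = 0.8),
# far more than 4 or 7 mice per group are required:
print(n_per_group(0.8))  # → 25
print(n_per_group(0.5))  # → 63 for a "medium" effect
```

On this standard reckoning, study groups of 4 or 7 mice are adequately powered only for implausibly enormous effects.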

Even if the paper had shown a much larger difference, it would not prove anything, because anyone microscopically examining the brains of two randomly selected animals will always be able to find little differences here and there. It is never clear or probable that such differences occurred because one set of mice got training that the others did not. No good evidence of brain-stored memories is ever produced by such studies. The people who do such junk science experiments are needlessly killing mice.
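The hazard of unrestricted checking can be illustrated with a simulation. Below, two groups of 7 mice each (mirroring the paper's group sizes) are drawn from the very same distribution, so there is no real training effect at all; yet checking 1,000 independent parameters still turns up dozens of "significant" differences by chance alone. This is an illustrative sketch, not an analysis of the paper's actual data:

```python
import random

random.seed(1)  # reproducible illustration

def mean(xs):
    return sum(xs) / len(xs)

def spurious_hits(n_params=1000, n_mice=7, threshold=2.0):
    """Count parameters on which two groups drawn from the SAME
    distribution differ by more than `threshold` standard errors --
    pure-noise 'findings' with no training effect whatsoever."""
    se = (2 / n_mice) ** 0.5  # std. error of a difference of two means
    hits = 0
    for _ in range(n_params):
        trained = [random.gauss(0, 1) for _ in range(n_mice)]
        control = [random.gauss(0, 1) for _ in range(n_mice)]
        if abs(mean(trained) - mean(control)) > threshold * se:
            hits += 1
    return hits

print(spurious_hits())  # typically around 40-50 spurious "significant" results
```

This is why pre-registration limiting the analysis to one or a few pre-specified checks matters so much: without it, some "difference" is virtually guaranteed to turn up.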

I can give a description of how an experiment of this type could be done so that it would meet at least some of the requirements of robust research. 

(1) There would be adequate study group sizes, such as maybe 30 mice in the group to be trained, and 30 mice in the control group. 

(2) The study would be pre-registered, so that there would be a commitment to gathering data and analyzing data in a specific, limited way. For example, the specification of the pre-registration document might state that exactly 25 microscope-readable slices would be taken from the same region of each mouse, such as the hippocampus or the motor cortex, and that the study would only analyze one parameter, such as the quantity of dendritic spines.   

(3) Each slice would be put in an envelope that had a subject number, and an indication of whether the mouse had been trained or not. 

(4) A simple computer program would be written that would have two functions: (a) the ability to generate a 7-digit random number and to store in a text file a supplied subject number, a "trained" indicator of either Y or N, and that generated 7-digit number; (b) the ability to retrieve that subject number and its "trained" indicator when supplied the 7-digit random number. This is an elementary programming task.

(5) The program would be used to generate random numbers that would be written on envelopes.  Each envelope containing a slice of brain tissue and a subject number would be replaced with an envelope containing one of the random numbers generated by the program.

(6) You would now have a set of envelopes marked only with random 7-digit numbers that a human could not recognize. Such a set of envelopes could be shuffled, and given to microscope analysts. Such analysts would thereby be completely blind to whether the tissue slices belonged to mice that had been trained or mice that had not been trained. There would be no chance of some bias effect in which an analyst tended to analyze trained mice differently. The microscope-using analysts would look for differences in tissue, using only the limited hypothesis to be tested that was stated in the pre-registration document.  So, for example, if that document said that only the thickness of synapses would be analyzed, then only that one thing would be analyzed. 

(7) After the microscopic analysis had been completed, and an analysis report form put in each of the envelopes, the computer program could be used to retrieve the original subject numbers and training status corresponding to each envelope. So, for example, someone holding an envelope with a random number of 4477353 might type in that number to the computer program, and get a reply of "Subject #21, Trained" or "Subject #35, Not Trained," with the answer retrieved from the text file previously made by the program.  The answer could be written on each envelope. 

(8) Then the data could be tallied up to see whether there was any difference between the characteristics of the trained mice and the untrained mice.

(9) Since the experiment would strictly adhere to the protocol of the original pre-registration design document, there would be no chance that the final analysis would include fewer or more brain slices than specified in that document. 
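The bookkeeping in steps (4) through (7) really is an elementary programming task. Here is a minimal sketch of such a blinding program (the file format and function names are my own illustrative choices):

```python
import random

def assign_codes(subjects, mapping_file):
    """subjects: list of (subject_number, trained_flag) pairs, with
    trained_flag 'Y' or 'N'.  Writes one line per subject to the
    mapping file -- '<7-digit code> <subject_number> <Y/N>' -- and
    returns the unique codes in the same order as the subjects."""
    codes = random.sample(range(1_000_000, 10_000_000), len(subjects))
    with open(mapping_file, "w") as f:
        for code, (subject, trained) in zip(codes, subjects):
            f.write(f"{code} {subject} {trained}\n")
    return codes

def lookup(code, mapping_file):
    """Given a 7-digit code from an envelope, return the original
    (subject_number, trained_flag) pair."""
    with open(mapping_file) as f:
        for line in f:
            c, subject, trained = line.split()
            if int(c) == code:
                return int(subject), trained
    raise KeyError(code)
```

Envelopes would be labeled only with the 7-digit codes; the mapping file would be kept from the microscope analysts until all analysis report forms were complete, and only then would lookup calls recover each envelope's subject number and training status.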

That would be a decent design for an experiment of this type. No design like this is followed by the vast majority of these "train them then dissect them" experiments. Typically such experiments make no use of blinding at all. Any difference in the reported characteristics can be explained by either pure chance variation or bias of the microscopic data analyst, motivated to report some difference. 

A paper such as "Learning-induced remodeling of inhibitory synapses in the motor cortex" tries to suggest that learning of a motor skill is stored as changes in dendritic spines. There is a reason why such a hypothesis makes no sense. Dendritic spines are very unstable things. 

[Image: dendritic spine]

 The 2015 paper "Impermanence of dendritic spines in live adult CA1 hippocampus" states the following, describing a 100% turnover of dendritic spines within six weeks:

"Mathematical modeling revealed that the data best matched kinetic models with a single population of spines of mean lifetime ~1–2 weeks. This implies ~100% turnover in ~2–3 times this interval, a near full erasure of the synaptic connectivity pattern."

The paper here states, "It has been shown that in the hippocampus in vivo, within a month the rate of spine turnover approaches 100% (Attardo et al., 2015; Pfeiffer et al., 2018)." The 2020 paper here states, "Only a tiny fraction of new spines (0.04% of total spines) survive the first few weeks in synaptic circuits and are stably maintained later in life."  The author here is telling us that only 1 in 2500 dendritic spines survive more than a few weeks.  Given such an assertion, we should be very skeptical about the author's insinuation that some very tiny fraction of such spines "are stably maintained." No one has ever observed a dendritic spine lasting for years, and the observations that have been made of dendritic spines give us every reason to assume that dendritic spines do not ever last for more than a few years. Conversely, human knowledge and human motor skills can last for 50 years or more, way too long a time to be explained by changes in dendritic spines or synapses, both of which change too much and too frequently to be a stable storage place for human memories. 
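The "near full erasure" arithmetic in the quoted passage is easy to check. Assuming simple first-order (exponential) turnover, which is the kind of kinetic model the paper describes rather than something established here, the surviving fraction after t weeks with mean lifetime τ is e^(−t/τ):

```python
import math

def surviving_fraction(weeks, mean_lifetime_weeks):
    """Fraction of an initial spine population still present after `weeks`,
    assuming first-order (exponential) turnover with the given mean lifetime."""
    return math.exp(-weeks / mean_lifetime_weeks)

# With a mean lifetime of ~1.5 weeks (the middle of the paper's 1-2 week
# estimate), two to three mean lifetimes leave only a small minority of
# the original spines:
for weeks in (3, 4.5):
    print(weeks, surviving_fraction(weeks, 1.5))
```

Two mean lifetimes leave about 14% of spines and three leave about 5%, which matches the paper's description of near-complete turnover of the synaptic connectivity pattern within a few weeks.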

The failure of neuroscientists to listen to what dendritic spines are telling us is epitomized by a 2015 review article on dendritic spines, which states, "It is also known that thick spines may persist for a months [sic], while thin spines are very transient, which indicate that perhaps thick spines are more responsible for development and maintenance of long-term memory." It is as if the writers had forgotten that humans can remember very well memories lasting 50 years, a length of time roughly a hundred times longer than a few months. 

A 2019 paper documents a 16-day examination of synapses, finding "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That's about a 33% disappearance rate over the course of 16 days. The same paper refers to another paper that "reported rates of [dendritic] spine eliminations in the order of 40% over an observation period of 4 days." A paper studying the lifetimes of dendritic spines in the cortex states, "Under our experimental conditions, most spines that appear survive for at most a few days. Spines that appear and persist are rare."

The 2023 paper here gives the graph below showing the decay rate of the volume of dendritic spines. It is obvious from the graph that they do not last for years, and mostly do not even last for six months. 


Page 278 of the same paper says, "Two-photon imaging in the Gan and Svoboda labs revealed that spines can be stable over extended periods of time in vivo but also display genesis (generation) and elimination (pruning) at a frequency of 1–4% per week." A structure vanishing at 2% per week loses, compounded over a year, about two-thirds of its population. Discussing the motor cortex, the paper here says, "We found that 3.5% ± 1.2% of spines were eliminated and 4.3% ± 1.3% were formed in motor cortex over 2 weeks (Figures 3J, 3K, and 3O; 224 spines, 2 animals)." An elimination rate of 3.5% every two weeks, compounded, would eliminate roughly 60% of spines within a year. 

The 2022 paper "Stability and dynamics of dendritic spines in macaque prefrontal cortex" studied how long dendritic spines last in a type of monkey. It says, "We found that newly formed spines were more susceptible to elimination, with only 40% persisting over a period of months." The same study found that "the percentage of elimination for pre-existing spines over 7 days was only 6% on average," a rate that, compounded, would eliminate about 96% of pre-existing dendritic spines within a year. Dealing with a type of monkey, the 2015 paper "In Vivo Two-Photon Imaging of Dendritic Spines in Marmoset Neocortex" tells us that "The loss or gain rate at the 1 d [one day] interval observed in this study was similar to those in previous studies of layer 5 neurons of the somatosensory cortex of transgenic mice (12% in 3 d [3 days] for both loss and gain; Kim and Nabekura, 2011) and layer 2/3 neurons of ferret V1 by the virus vector method (4% in 1 d [1 day] for both loss and gain; Yu et al., 2011)." The reported loss rates of dendritic spines would cause essentially 100% loss within a year. 
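The year-scale figures in the last few paragraphs can be checked with a compounding model, in which each period's loss rate applies only to the spines still present (a simplifying assumption, since new spines are also formed):

```python
def remaining_after(loss_per_period, periods):
    """Fraction of an original spine population remaining after compounding
    a fixed per-period elimination rate over the given number of periods."""
    return (1 - loss_per_period) ** periods

# Compounded over a year:
#   2% per week, 52 weeks      -> ~35% remain (about two-thirds eliminated)
#   6% per week, 52 weeks      -> ~4% remain  (near-complete elimination)
#   3.5% per 2 weeks, 26 times -> ~40% remain
```

Even under this conservative compounding model, a large majority of spines turn over within a year or two, far too fast to anchor memories lasting 50 years.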

Most synapses are attached to dendritic spines, so all of these findings about the instability and short lifetimes of dendritic spines are also findings about the instability and short lifetimes of synapses. Both synapses and dendritic spines are way, way too unstable to be a credible storage place for human memories that can last for decades. There is no place in the brain that can be reasonably postulated as a storage place allowing memories to persist for decades. 

[Image: dumb neuroscientist teachings]

Thursday, January 16, 2025

Microscopes Will Never Find the Slightest Trace of Learned Information in a Human Brain

A recent story at the LiveScience.com site has the title "Could we ever retrieve memories from a dead person's brain?" The story has many  inaccurate claims, and fails to tell us the most important facts that are relevant to the question being considered. The subtitle of the article makes this claim: "Neuroscientists have identified the physical locations where memories are stored in the brain."  No, they have not done any such thing, and the article fails to present any good evidence that any such thing was done. 

We have this claim about a method to retrieve a memory from a brain:

"With today's technology, retrieving memories might go something like this. First, identify the set of brain cells, or neurons, that encoded a specific memory in the brain and understand how they are connected. Then, activate those neurons to create an approximate neural network, a machine learning algorithm that mimics the way the brain works."

This does not make any sense as an idea about how one would go about trying to read a memory from a brain. The first step in such a process would be to use microscopes to look for any speck of a trace of learned information in a brain. No one has ever succeeded in doing any such thing. Microscopic examination of brain tissue has never revealed a single word anyone ever learned. Microscopic examination of brain tissue has never revealed a single letter or character of anything anyone previously learned, and has never revealed a single pixel (an image dot) of anything anyone ever previously saw. "Neural networks" are a misnamed type of computer technology; they do not actually mimic the brain or its physical organization. 

We have quotes by a neuroscientist (Don Arnold) doing some vague hand-waving and speaking as if he knew things he does not actually know. We read,  "Memories are encoded by groups of neurons, Arnold said." That's the kind of vacuous, vague hand-waving that someone may use when he lacks any actual knowledge of how a brain could store memories.  When people make big important-sounding claims that are not well-supported by evidence, they typically speak in a kind of vague, hand-waving way.  For example, when asked where there existed the weapons of mass destruction that the US claimed were in Iraq before invading it in 2003, US defense secretary Donald Rumsfeld said this:

"We know where they are. They're in the area around Tikrit and Baghdad and east, west, south and north somewhat."

Such weapons of mass destruction were never found in Iraq in 2003 or in the next twenty years. 

Reminding me of the Rumsfeld statement, the LiveScience article claims that long-term memories are formed in the hippocampus, a claim not backed up by any robust evidence. The main research paper on the hippocampus and memory is the paper "Memory Outcome after Selective Amygdalohippocampectomy: A Study in 140 Patients with Temporal Lobe Epilepsy." That paper gives memory scores for 140 patients who almost all had the hippocampus removed to stop seizures. Using the term "en bloc," which means "in its entirety," and the term "resected," which means "cut out," the paper states, "The hippocampus and the parahippocampal gyrus were usually resected en bloc." The "Memory Outcome after Selective Amygdalohippocampectomy" paper does not use the word "amnesia" to describe the results. That paper gives memory scores showing only a modest decline in memory performance. The paper states, "Nonverbal memory performance is slightly impaired preoperatively in both groups, with no apparent worsening attributable to surgery." In fact, Table 3 of the paper informs us that a lack of any significant change in memory performance after removal of the hippocampus was far more common than a decline in memory performance, and that a substantial number of the patients improved their memory performance after their hippocampus was removed. 

The LiveScience article tells us, "Other parts of the brain store different aspects of a memory, like emotions and other sensory details, according to the Cleveland Clinic." We are referred to a page on some web site of the Cleveland Clinic that has no named author, and is the kind of article that you might get if you asked ChatGPT about how memory works. That Cleveland Clinic page does not actually make the claim that the LiveScience article attributes to it. The Cleveland Clinic does not claim that different aspects of a memory are stored in different places, but merely claims that other parts of a brain "participate in memory processes."

The LiveScience article then makes a false claim, repeating a groundless achievement legend. It states this:

"Neuroscientists have identified engrams in the hippocampuses of mouse brains. For instance, in a 2012 study published in the journal Nature, researchers found the specific brain cells associated with a memory of an experience that induced fear." 

No, there does not exist any robust evidence for engrams (neural storage places of memories) in any animal. The paper the LiveScience article links to is the 2012 paper “Optogenetic stimulation of a hippocampal engram activates fear memory recall.” That is a very low-quality paper guilty of several bad examples of Questionable Research Practices. We see in Figure 3 of that paper that inadequate sample sizes were used. The numbers of animals listed in that figure (during different parts of the experiments) are 12, 12, 12, 5, and 6, for an average of 9.4. That is not anything like what would be needed for a moderately convincing result, which would be a minimum of 15 or 20 animals for each study group, and probably more. The experiment relied crucially on judgments of fear produced by manual assessments of freezing behavior, which were not corroborated by any other technique such as heart-rate measurement. All mouse research papers relying on "freezing behavior" judgments are junk-science papers, for reasons I discuss in my post here, "All Papers Relying on Rodent 'Freezing Behavior' Estimations Are Junk Science." The 2012 study does not describe in detail any effective blinding protocol, which is another bad defect. The study involved stimulating certain cells in the brains of mice, with something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory was being recalled when the stimulation occurred. 

There does not exist any observational or experimental support for the existence of engrams (memory storage places) in any animal. Some papers have claimed to have produced such evidence, but their claims do not stand up to critical scrutiny. Papers claiming to produce such evidence are generally guilty of multiple types of Questionable Research Practices such as way-too-small study group sizes, lack of a blinding protocol, and the use of one or more unreliable techniques for judging memory performance. 

The LiveScience article then gives us this bit of excuse-making for why no one has ever read a memory from a brain: "The retrieval of a dead person’s memories is further complicated because the discrete parts of a memory are dispersed throughout the brain; for instance sensory details that can also be stored in the parietal lobe and sensory cortex." This is an appeal to the theory that a single memory is stored not in one tiny part of the brain but in multiple scattered parts of the brain. There is no evidence for such a theory, and the theory makes things worse for the person claiming that the brain stores memories, for reasons I discuss in my post "Why the 'A Memory Is Stored Throughout the Brain' Idea Makes Things Much Worse." If a single memory were stored in multiple locations in the brain, then finding all of those locations and instantly assembling their contents (in a brain without any addresses or indexes or sorting) would be something even harder to explain than the memory existing in a single spot that was instantly found. 

Giving us its second example of claiming a cited source said something that it did not actually say, the LiveScience article states, "Neurons within a given engram are connected through synapses, the spaces between neurons where electrochemical signals travel, according to the National Library of Medicine."  The page that it links to does not ever use the word "engram," and does not refer to either memory or learning. The LiveScience article claims that according to the neuroscientist Arnold, "there is evidence that memories move to different locations as they are consolidated in the brain." There is no robust evidence of any such thing. Neuroscientists have no credible evidence of memories being stored in any part of the brain of any organism, and they do not have any decent evidence of a memory moving around from one part of the brain to another. 

Arnold is quoted as saying, "You get this sort of cascade of neurons that encode these different things, and each one of them is connected in this engram." That is hand-waving. Scientists lack any robust evidence of any such thing as an engram or an encoding of learned information or experiences in the brain. No scientist has a credible detailed theory of how such encoding could occur. An ocean of difficulties arises when you consider how human learned knowledge and experiences could be translated into brain states through any imaginable system of encoding. Part of the problem is the extreme variety of things that people can learn and experience (concepts, facts, theories, visual experiences, auditory experiences, smell experiences, taste experiences, pain experiences, touch experiences, and emotional reactions), meaning there could be no simple encoding scheme (something as simple as the genetic code) that could handle even a tenth of all the types of memories people can form. 

We have no discussion of some of the chief facts relevant to the topic discussed. Some of these facts are below:

(1) Human brain tissue has already been exhaustively studied at very high microscopic resolutions. My post "They Stored and Studied Thousands of Brains, But Still Failed to Show Brains Store Memories" discusses how places such as the Lieber Institute have microscopically studied thousands of brains, most of which were preserved very soon after death. The same post describes how Denmark's University of Odense has stored more than 9000 brains, microscopically examining a large fraction of them. Very much healthy brain tissue just-extracted from living patients has been microscopically examined, because normal brain tissue is often extracted from epilepsy patients when operations are done to prevent intractable seizures resistant to medicine. 
(2) Despite all of that microscopic examination, no one has ever found the slightest trace of any learned information by microscopically examining a brain. Microscopic examination of brain tissue has never revealed a single letter or character of anything anyone learned, and has never revealed a single pixel of anything anyone ever saw. It isn't just that no one ever found anything like "The US has 50 states" by microscopically examining brain tissue; it's that no one ever found a U or an S from microscopically examining brain tissue. 
(3) The discovery of a bit of learned information from microscopically examining brain tissue is something that would be many times easier to do than the "recreation of a full memory" imagined by the LiveScience article, but neither of these things has occurred. 
(4) Modern microscopic techniques are powerful enough to discover traces of learned information in the brain if they existed, but no such discovery has occurred. 
(5) Nothing in the brain looks like any type of mechanism for storing learned information or storing memories.  Nothing in the brain looks like any type of apparatus for writing learned  information. 
(6) Nothing in the brain looks like any type of mechanism for reading a stored memory. We can imagine how some organism's brain might physically look like something capable of reading information from a particular spot, by means of something like a moving cursor or moving reading component. The brain has no such thing, and the brain has no moving anatomical parts. 
(7) The places claimed to be sites of brain memory storage (synapses) are unstable places of high molecular turnover, where all the proteins last for only a few weeks or less. There is no credible theory of how places so unstable could be storage places for memories that can reliably last for 60 years. 
(8) There is nothing in the brain that looks like learned information stored according to some systematic format that humans understand or do not understand. Even when scientists cannot figure out a code used to store information, they often can detect hallmarks of encoded information. For example, long before Europeans were able to decipher how hieroglyphics worked, they were able to see a repetition of symbolic tokens that persuaded them that some type of coding system was being used. Nothing like that can be seen in the brain. We see zero signs that synapses or dendritic spines are any such things as encoded information. 
(9) Many humans can remember with perfect accuracy very long bodies of text, but synapses in the brain do not reliably transmit information. An individual chemical synapse transmits an action potential with a reliability of only 50% or less, sometimes as little as 10%. A recall of long bodies of text would require a traversal of very many chemical synapses. A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."
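The unreliability problem in point (9) compounds multiplicatively. As a toy model (assuming each synapse transmits independently, which is a simplification), the chance that a signal survives a chain of n synapses each transmitting with probability p is p raised to the power n:

```python
def chain_reliability(p_single, n_synapses):
    """Probability that a signal crosses all n synapses in a chain,
    with each synapse transmitting independently with probability p_single."""
    return p_single ** n_synapses

# Even at 50% reliability per synapse, ten synapses in a row pass a
# signal less than one time in a thousand; at 10% per synapse, a chain
# of only five synapses gives odds of about one in a hundred thousand.
```

Under these assumptions, reliable word-perfect recall through long synaptic chains would require an error-correction capability that no one has identified in the brain.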

Below is a diagram from the paper "Materials Advances Through Aberration-Corrected Electron Microscopy." We see that since the time the genetic code was discovered about 1953, microscopes have grown very many times more powerful. The A on the left stands for an angstrom, a tenth of a nanometer (that is, a ten-billionth of a meter). 


Currently the most powerful microscopes can see things about 1 angstrom in width, which is a tenth of a nanometer. How does this compare to the sizes of the smallest units in brains? Those sizes are below:

Width of a neuron body (soma): about 100 microns (micrometers), which is about 1,000,000 angstroms.

Width of a synapse: about 500 nanometers, about 5,000 angstroms. When you search for "width of a synapse," you will commonly get a figure of 20 to 40 nanometers, but that is the width of the synaptic cleft, the narrow gap between the presynaptic terminal and the postsynaptic membrane. The full synaptic head is much wider, as you can see from the page here.

Width of a dendritic spine: about 50 to 500 nanometers, about 500 to 5000 angstroms.

Length of a dendritic spine: the site here says, "the thin spine neck, which connects the spine to the main dendritic branch, has lengths between 0.04 and 1 μm [microns] and has 'door knob'-shaped head structures with diameters that between 0.5 and 2 μm [microns]." That length dimension is between 400 and 10,000 angstroms; and that head diameter is between 5,000 and 20,000 angstroms. 
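The unit arithmetic behind these comparisons is straightforward to verify (1 nanometer = 10 angstroms; 1 micron = 10,000 angstroms):

```python
ANGSTROMS_PER_NM = 10
ANGSTROMS_PER_MICRON = 10_000

soma_width = 100 * ANGSTROMS_PER_MICRON   # 100 microns -> 1,000,000 angstroms
synapse_width = 500 * ANGSTROMS_PER_NM    # 500 nm      -> 5,000 angstroms
spine_width_min = 50 * ANGSTROMS_PER_NM   # 50 nm       -> 500 angstroms

# A microscope resolving 1 angstrom can distinguish hundreds of separate
# points even across the smallest dendritic spines:
points_across_smallest_spine = spine_width_min // 1
```

So even the smallest of these structures is hundreds of times wider than the resolution limit of the best current microscopes.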

The visual below (from the page here) shows an electron microscope image of a synapse. The width of the synaptic head is more than 500 nanometers (nm). We see nothing that looks like any kind of storage of human learned information. The neurotransmitters inside the spherical vesicles are short-lived chemicals that don't even last a week. 

[Image: synapse photograph]

Below we see a closeup electron microscope photograph of some dendritic spines that are 500 nanometers (5,000 angstroms) wide, from the scientific paper here, "Ultrastructural comparison of dendritic spine morphology preserved with cryo and chemical fixation," by Tamada et al. We see nothing that looks anything like stored learned information or stored memory information. Do a Google search for "diagram of dendritic spine head" and you will get diagrams that look like nothing that could be any system for storing information long-term. The diagrams will show that such dendritic spine heads are just bags of short-lived proteins and chemicals. A few of these diagrams may show actin filaments looking a bit like a structure, but searching for "lifetime of actin filaments" you will be told that such filaments have lifetimes of only minutes. 

[Image: dendritic spine closeup]

The only things in the brain smaller than the structures shown above are protein molecules. But we know a reason why protein molecules cannot be a storage place for memories that can last for 50 years. The reason is that the average lifetime of a brain protein molecule is about 1,000 times shorter than the longest time over which people can remember things. Proteins in the brain have an average lifetime of two weeks or shorter. 

A scientific paper states this:

"Experience-dependent behavioral memories can last a lifetime, whereas even a long-lived protein or mRNA molecule has a half-life of around 24 hrs. Thus, the constituent molecules that subserve the maintenance of a memory will have completely turned over, i.e. have been broken down and resynthesized, over the course of about 1 week."

Research on the lifetime of synapse proteins is found in the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that one earlier 2010 study found that the average half-life of brain proteins was about 9 days, and that a 2013 study found that the average half-life of brain proteins was about 5 days. The study then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main types of brain proteins (such as nucleus, mitochondrion, etc.) have half-lives of 15 days or less.  The 2018 study here precisely measured the lifetimes of more than 3000 brain proteins from all over the brain, and found not a single one with a lifetime of more than 75 days (figure 2 shows the average protein lifetime was only 11 days). 
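Half-life figures like these translate into vanishingly small survival fractions over memory-relevant timescales. Assuming first-order decay (the standard interpretation of a half-life), the fraction of original molecules remaining after t days with half-life h is 0.5 raised to the power t/h:

```python
def fraction_remaining(days, half_life_days):
    """Fraction of the original molecules left after `days`,
    assuming first-order decay with the given half-life."""
    return 0.5 ** (days / half_life_days)

# With the ~5-day synaptic-protein half-life cited above, after 75 days
# roughly 0.003% of the original molecules remain; after a single year,
# effectively none do -- let alone after 50 years.
```

Whatever physical substrate holds a 50-year-old memory, it cannot be the protein molecules that were present when the memory formed.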

The paper here states, "Experiments indicate in absence of activity average life times ranging from minutes for immature synapses to two months for mature ones with large weights."

Clearly the most powerful microscopes have more than enough resolution to read memories stored in neurons or synapses or dendritic spines, if such memories existed. And more than 10,000 brains have been microscopically studied in recent years. The failure to microscopically read any memories from human brain tissue is a major reason for thinking that brains do not store human memories. 

If memories were stored in the brain, humans would have been able to read learned information stored in brains by roughly the year 1960, when the resolution of microscopes reached about 10 angstroms. If memories were stored in the brain, we would have discovered irrefutable evidence of such a thing about 60 years ago. The total failure to find a single speck of learned information in the brain by microscopic examination is one of the strongest reasons for disbelieving in a brain storage of memories. 

I predict with great confidence that microscopes will never find the slightest trace of learned information in the human brain, because memories are not stored in brains. But we may have in the future some occasional "false alarm" claims to have accomplished such a thing, claims that are not supported by robust evidence. Neuroscientists have often been guilty of both smoke-and-mirrors trickery and pareidolia, in which someone claims to see something that isn't really there, typically because he is scanning large bodies of random, ambiguous data, eagerly hoping to find a particular pattern. So we may see some pareidolia in which a neuroscientist claims to see a memory by microscopic examination. Such a thing will be like a fervent believer in animal ghosts in the clouds examining thousands of photos of clouds, and claiming that this one or that one looks like the shape of an animal. 

[Image: evidence-ignoring neuroscientist]

Note that in the LiveScience article there is not any mention of any scientist sounding hopeful about a possibility of discovering stored information in the brain. 

This week I was reminded of the ability of the human mind to retain memories for 50 years, contrary to what we would expect from the high molecular turnover in brains. I had a recollection which proved the ability of the mind to recall very old memories that had not been recalled in half a century. For some reason I recalled a book I had read about 50 years ago, and never since: the science fiction book "Galaxies Like Grains of Sand" by Brian Aldiss. I remembered some lines from the book. I wrote them down on paper like this:

"The mirror of the past lies shattered. The fragments you hold in your hand."

After I wrote this recollection of something I had not read, thought of or heard quoted in fifty years, I borrowed the book on www.archive.org. I see that the lines were these (almost exactly as I remembered them):

"The long mirror of the past is shattered...Only a few fragments are left, and these you hold in your hand." 

For a person like me, the mirror of the past is not shattered, but remains well preserved after 60 years, contrary to what we would expect if memories were stored in brains with such high molecular turnover and such constant remodeling of synapses and dendritic spines. 

Below is a quote on the same topic from an earlier post discussing why brains cannot be the storage place of very old memories:

"I know for a fact that memories can persist for 50 years, without rehearsal. Recently I was trying to recall all kinds of details from my childhood, and recalled the names of persons I hadn't thought about for decades, as well as a Christmas incident I hadn't thought of for 50 years (I confirmed my recollection by asking my older brother about it). ...Upon looking through a list of old children shows from the 1960's, I saw the title “Lippy the Lion and Hardy Har Har,” which ran from 1962 to 1963 (and was not syndicated in repeats, to the best of my knowledge). I then immediately sung part of the melody of the very catchy theme song, which I hadn't heard in 53 years. I then looked up a clip on a youtube.com, and verified that my recall was exactly correct. I also recently recalled 'The Patty Duke Show' from the 1960's, a show I haven't seen in 50 years, and recalled that in the opening title sequence we saw Patty walking down some stairs. I looked up the title sequence on www.youtube.com, and verified that my 50-year-old memory was correct. This proves that a 53-year-old memory can be instantly recalled."

The prediction I make here is just one of several predictions in my 2019 post "Contrarian Predictions Regarding Biology, the Brain and Technology."  So far my predictions in that post are holding up very well.