Tuesday, February 4, 2020

When Animals Cast Doubt on Dogmas About Brains

These days our science news sources try to get us excited about many a study that is not worthy of our attention. But when a study tells us something important, it will receive almost no attention if it reports something that conflicts with prevailing dogmas about reality. So a recent not-much-there study involving zapping dead brain tissue got lots of attention in the press, while a far more important neuroscience study received almost none. The more important study showed that a rat with almost no brain had normal cognitive and memory capabilities.

The study was reported in Scientific Reports, a sub-journal of the very prestigious journal Nature, and had the title "Life without a brain: Neuroradiological and behavioral evidence of neuroplasticity necessary to sustain brain function in the face of severe hydrocephalus."  The study examined a rat named R222 that had lost almost all of its brain to hydrocephalus, a disease that replaces brain tissue with a watery fluid. Despite the rat having lost almost all of its brain, the study found that "Indices of spatial memory and learning across the reported Barnes maze parameters (A) show that R222 (as indicated by the red arrow in the figures) was within the normal range of behavior, compared to the age matched cohort."  In other words, the rat with almost no brain seemed to learn and remember as well as a rat with a full brain.

This result should not come as any surprise to anyone familiar with the research of the physician John Lorber. Lorber studied many human patients with hydrocephalus, in which healthy brain tissue is gradually replaced by a watery fluid. Lorber's research is described in this interesting scientific paper. A mathematics student with an IQ of 130 and a verbal IQ of 140 was found to have “virtually no brain.” His vision was apparently perfect except for a refraction error, even though he had no visual cortex (the part of the brain involved in sight perception).

In the paper we are told that of about 16 patients Lorber classified as having extreme hydrocephalus (with 90% of the area inside the cranium replaced with spinal fluid), half had an IQ of 100 or more. The article mentions 16 patients, but the number with extreme hydrocephalus was actually 60, as this article states, using information from this original source that mentions about 10 percent of a group of 600. So the actual number of these people with tiny brains and above-average intelligence was about 30. The paper states:

"[Lorber] described a woman with an extreme degree of hydrocephalus showing 'virtually no cerebral mantle' who had an IQ of 118, a girl aged 5 who had an IQ of 123 despite extreme hydrocephalus, a 7-year-old boy with gross hydrocephalus and an IQ of 128, another young adult with gross hydrocephalus and a verbal IQ of 144, and a nurse and an English teacher who both led normal lives despite gross hydrocephalus."

Sadly, the authors of the "Life without a brain" paper seem to have learned too little from the important observational facts they recorded. Referring to part of the brain, they claim that "the hippocampus is needed for memory," even though no hippocampus could be detected in a brain scan of their rat R222. They stated, "It was not possible from these images of R222 to identify the caudate/putamen, amygdala, or hippocampus."  Not very convincingly, the authors claimed that rat R222 had a kind of flattened hippocampus, based on some chemical signs (rather like guessing that some flattened roadkill was a particular type of animal).

But how could this rat with almost no brain have performed normally on the memory and cognitive tests?  The authors appeal to a miracle, saying, "This rare case can be viewed as one of nature’s miracles." If you believe that brains store memories and cause thinking, you must regard cases such as this (and Lorber's cases) as "miracles," but when a scientist needs to appeal to such a thing, it is a gigantic red flag. It would be much better to have a theory of the mind under which such results are what we would expect, rather than a miracle.  To get such a theory, we must abandon the unproven and much-discredited idea that brains store memories and that brains create minds.

The Neuroskeptic blogger at Discover magazine's online site mentions this rat R222, and the case of humans who performed well despite having lost the vast majority of their brain to disease.  Let's give him credit for mentioning the latter. But we shouldn't applaud his use of a trick that skeptics constantly employ: always demand some piece of evidence you believe is unavailable.

This "keep moving the goalposts" trick works rather like this. If someone shows a photo looking like a ghost on the edge of a photo, say that it doesn't matter because the ghost isn't in the middle of the photo. If someone then shows you a photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because the photo isn't a 6-megabyte high resolution photo.  If someone then shows you a 6-megabyte high resolution photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because it's just a photo and not a movie. If someone then shows you a movie of what looks like a ghost, say that it doesn't matter, because there were not multiple witnesses of the movie being made. If someone then shows you a movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the movie isn't a movie-theater-quality 35 millimeter Technicolor Panavision movie.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the ghost wasn't levitating. If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter because the ghost wasn't talking.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating talking ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the levitating talking ghost didn't explain the meaning of life to your satisfaction. 

The Neuroskeptic uses such a technique when he writes the following:

"In the case of the famous human cases of hydrocephalus, the only evidence we have are the brain scans showing massively abnormal brain anatomy. There has never, to my knowledge, been a detailed post-mortem study of a human case."

If there were such an alleged "shortfall," it would be irrelevant, because a brain scan tells us perfectly well the degree of brain tissue loss when someone has lost most of their brain, as happened with Lorber's patients and other hydrocephalus patients.  Complaining about the lack of an autopsy study in such patients is like saying that you don't know that your wife lacks a penis, because no one did an autopsy study on her.  Neuroskeptic's claim of no autopsy studies on hydrocephalus patients is incorrect. When I do a Google search for "autopsy of hydrocephalus patient," I quickly find several such studies, such as this one, which reports that one of 10 patients with massive brain loss due to hydrocephalus was "cognitively unimpaired."  Why did our Neuroskeptic blogger insinuate that such autopsy studies do not exist, when discovering their existence is as easy as checking the weather?

There are many animal studies (such as those of Karl Lashley) that conflict with prevailing dogmas about the brain. One such dogma is that the cerebral cortex is necessary for mental function.  But some scientists once tried removing the cerebral cortex of newly born cats. The abstract of their paper reports no harmful results:

"The cats ate, drank and groomed themselves adequately. Adequate maternal and female sexual behaviour was observed. They utilized the visual and haptic senses with respect to external space. Two cats were trained to perform visual discrimination in a T-maze. The adequacy of the behaviour of these cats is compared to that of animals with similar lesions made at maturity."

Figure 4 of the full paper clearly shows that one of the cats without a cerebral cortex learned well in the T-maze test, improving from a score of 50 to almost 100. Karl Lashley did innumerable experiments removing parts of animals' brains. He found that you could typically remove very large parts of an animal's brain without affecting the animal's performance on tests of learning and memory.

Ignoring such observational realities, our neuroscientists cling to their dogmas, such as the claims that memories are stored in brains and that the brain is the source of human thinking.  Another example of a dubious neuroscience dogma is the claim that the brain uses coding for communication.  A scientific paper discusses how common this claim is:

"A pervasive paradigm in neuroscience is the concept of neural coding (deCharms and Zador 2000): the query 'neural coding' on Google Scholar retrieved about 15,000 papers in the last 10 years. Neural coding is a communication metaphor. An example is the Morse code (Fig. 1A), which was used to transmit texts over telegraph lines: each letter is mapped to a binary sequence (dots and dashes)."

But the idea that the brain uses something like a Morse code to communicate has no real basis in fact.  The paper quoted above almost confesses this by stating the following:

"Technically, it is found that the activity of many neurons varies with stimulus parameter, but also with sensory, behavioral, and cognitive context; neurons are also active in the absence of any particular stimulus. A tight correspondence between stimulus property and neural activity only exists within a highly constrained experimental situation. Thus, neural codes have much less representational power than generally claimed or implied."

That's moving in the right direction, but it would be more forthright and accurate to say that there is zero real evidence that neurons are using any type of code to represent human learned information, and that the whole idea of "neural coding" is just a big piece of wishful thinking where scientists are seeing what they hope to see, like someone looking at a cloud and saying, "That looks like my mother."
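To make the coding metaphor concrete, here is a minimal sketch (using an invented four-letter alphabet, not the full Morse table) of what a genuine code such as Morse involves: a fixed, agreed-upon mapping that a receiver can invert. Nothing like such a stable, invertible mapping has ever been identified for learned information in neurons.

```python
# A code in the Morse sense: a fixed mapping between symbols and signal patterns.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

def encode(text):
    """Map each letter to its dot/dash pattern, separated by spaces."""
    return " ".join(MORSE[ch] for ch in text)

def decode(signal):
    """Invert the mapping, which is possible only because the code is fixed."""
    inverse = {pattern: letter for letter, pattern in MORSE.items()}
    return "".join(inverse[token] for token in signal.split(" "))

assert encode("SOS") == "... --- ..."
assert decode("... --- ...") == "SOS"
```

The point of the sketch is that a real code requires a stable dictionary shared by sender and receiver; the paper quoted above concedes that neural activity shows no such tight, context-free correspondence.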



Saturday, January 18, 2020

"Particle Experiences" and Other Dubious Ideas of Panpsychism

The book Galileo's Error: Foundations for a New Science of Consciousness by philosopher Philip Goff contains quite a few misfires. The biggest is an error extremely common among today's philosophers: using the way-too-small term “problem of consciousness” to discuss current shortfalls in explaining the human mind.

What we actually have is an extremely large “problem of explaining human mental capabilities and human mental experiences” that is vastly larger than merely explaining consciousness. The problem includes all the following difficulties and many others:

  1. the problem of explaining how humans are able to have abstract ideas;
  2. the problem of explaining how humans are able to store learned information, despite the lack of any detailed theory as to how learned knowledge could ever be translated into neural states or synapse states;
  3. the problem of explaining how humans are able to reliably remember things for more than 50 years, despite extremely rapid protein turnover in synapses, which should prevent brain-based storage of memories for any period of time longer than a few weeks;
  4. the problem of how humans are able to instantly retrieve rarely accessed information, despite the lack of anything like an addressing system or an indexing system in the brain;
  5. the problem of how humans are able to produce great works of creativity and imagination;
  6. the problem of how humans are able to be conscious at all;
  7. the problem of why humans have such a large variety of paranormal psychic experiences and capabilities such as ESP capabilities that have been well-established by laboratory tests, and near-death experiences that are very common, often occurring when brain activity has shut down;
  8. the problem of how humans have such diverse skills and experiences as mathematical reasoning, moral insight, philosophical reasoning, and refined emotional and spiritual experiences;
  9. the problem of self-hood and personal identity, why it is that we always continue to have the experience of being the same person, rather than just experiencing a bundle of miscellaneous sensations;
  10. the problem of intention and will, how is it that a mind can will particular physical outcomes.

It is therefore a ridiculous oversimplification for philosophers to raise a mere "problem of consciousness” that refers to only one of these problems, and to speak as if such a “problem of consciousness” were the only difficulty a philosophy of mind needs to tackle. But that is exactly what Philip Goff does in his book. One indication of his failure to pay attention to the problems he should be addressing is that (according to his index) he refers to memory on only two pages of his book, neither of which says anything of substance about human memory or the problems of explaining it. His index also contains no mention of insight, imagination, ideas, will, volition or abstract ideas. The book's sole mention of the problem of self-hood or the self is (according to the index) a single page referring to “self, as illusion.” The book's sole reference to paranormal phenomena is a non-substantive reference on a single page. Ignoring the vast evidence for psi abilities, near-death experiences and other paranormal phenomena (supremely relevant to the philosophy of mind) is one of the greatest errors of academic philosophers of the past fifty years.

Imagine a baseball manager who has a “philosophy of winning baseball games” that is simply “make contact with the ball.” If you had such a philosophy, you would be paying attention to only a very small fraction of what you need to be paying attention to in order to win baseball games. And any philosopher hoping to advance a credible philosophy of mind has to pay attention to problems vastly more varied than a mere “problem of consciousness” or problem of why some beings are aware.

Goff's philosophical approach is to try to sell the old idea of panpsychism: the view, around for a very long time, that consciousness is in everything, or that consciousness is an intrinsic property of matter. A panpsychist may argue that just as mass is an intrinsic property of matter, so is consciousness.

As shown by psychology textbooks that may run to 500 pages, the human mind (including memory) is an incredibly diverse and complicated thing, consisting of a huge number of capabilities and aspects. It has always been quite an error when people try to describe so complicated a thing as something simple and one-dimensional.  This is what panpsychists have always done when they try to reduce the mind to the word "consciousness," which they then describe as a "property." A property is a simple aspect of something that can be described by a single number (for example, weight is a property of matter, and length is a property of matter, both of which can be stated as a single number).  A mind is something vastly more complicated than a property.  

Goff commits this same simplistic error by trying to shrink the human mind to the word "consciousness" throughout his book, telling us on page 23 that consciousness is a "feature of the physical world," and on page 113 that "consciousness is a fundamental and ubiquitous feature of physical reality." When I look up "feature," I find that it is defined to mean the same thing as "property": "a distinctive attribute or aspect of something."  Human minds are vastly more complicated than any mere "feature" or "property" or "aspect" or "attribute."  We are being fed simplistic pablum when we are told that our minds are some "feature" or "aspect" or "property." If you've started out with the vast diversity and extremely multifaceted richness of the human mind, and somehow ended up with a one-dimensional word such as "feature" or "aspect" or "property," you've gone seriously wrong somewhere. Call it a shrinkage snafu.

So many professors act like masters of concealment, misrepresenting in many ways the gigantic mental and biological complexity of human beings, as if they were intent on covering up our complexities.  And so we have utterly misleading cell diagrams in our biology textbooks, which make it look as if there are only a few organelles per cell (the paper here tells us that there are typically hundreds or thousands of organelles per cell). And so we have "cell types" diagrams, which make it look as if there are only a few types of cells (the human body actually has hundreds of types of cells). And so we have the myth that DNA is a blueprint or a recipe for making humans, false not only because of the lack of any such human specification in DNA, but also because of the naive error of speaking as if you could ever build an ever-changing, supremely dynamic organism like a human (as internally dynamic as a very busy factory) through some mere recipe or blueprint of the kind you would use to construct a static house or a static piece of food.  And so we have the complexity-concealing claim that the vastly organized systemic arrangements of the human body can be explained by the "stuff piles up" idea of the accumulation of mutations (as if something as complex as a city could be explained by whatever explains snow drifts). And so we have the frequent reality-denying assertions that mentally humans are "just another primate" or that other mammals are "just like us." And so we have the great complexity concealment of speaking as if a human mind were mere awareness or consciousness that could be described as a "property" or "feature."

Panpsychism creates the problem that we must then end up believing that all kinds of inanimate things are conscious to some degree. If consciousness were some intrinsic property of matter, it would seem to follow that the more matter, the greater the consciousness. So we would have to believe that the large rocks in Central Park of New York City are far more conscious than we are. And we would also have to believe that the Moon is vastly more conscious than we are. But if such inanimate things are far more conscious than we are, why do they not give us the slightest indication that they are conscious? There is no sign of any intelligent motion in the comets or asteroids that travel through space. Instead they seem to operate according to purely physical principles, exactly as if they had no consciousness whatsoever. That's why astronomers can predict very exactly how closely an asteroid will pass by our planet, and the exact day it will do so. So it seems that Goff's claim on page 116 that panpsychism is “entirely consistent with the facts of empirical science” is not actually true. To the contrary, we see zero signs of any consciousness or will in any non-biological thing, no matter how great its size, contrary to what we would expect under the theory of panpsychism.


No sign of any Mind here (credit: NASA)

On page 113 Goff suggests that maybe it is just certain arrangements of matter that might be conscious.  Goff isn't being terribly clear when he tells us on page 113, "Most panpsychists will deny that your socks are conscious, while asserting that they are ultimately composed of things that are conscious." So what does that mean: that the threads of your socks are conscious? If a panpsychist tries to defend his beliefs by denying that all material things are conscious, this pulls the legs out from under the table of panpsychism, depriving it of any small explanatory value it might have.  Once you go from "all matter is conscious" to "only certain arrangements of matter are conscious," you are back to the same problem materialism has: no one can see any reason why consciousness would arise from some particular arrangement of matter.

It would seem that the panpsychist faces a dilemma: either maintain that consciousness is an intrinsic property of matter (leaving you perhaps with some very small explanatory power, but many absurd consequences, such as large rocks being more conscious than humans), or maintain that only special arrangements of matter are conscious (which would seem to remove any explanatory reason for believing in panpsychism in the first place).

On pages 150 to 153 Goff shows himself to be an uncritical consumer of one of the biggest legends of neuroscience: that split-brain patients have a dual consciousness. They have no such thing, as we can discover by watching YouTube interviews with split-brain patients who clearly have a single self. A scientific study published in 2017 set the record straight on split-brain patients. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.” The actual facts about split-brain surgery are related here by a surgeon who has performed such an operation. He states this about split-brain patients:

"After the surgery they are unaffected in everyday life, except for the diminished seizures. They are one person after the surgery, as they were before."

Panpsychism does very little to help with the explanatory problems in the philosophy of mind. The main reason is that it does not help with more than one of the ten problems listed at the beginning of this post. For example, panpsychism is worthless in explaining how humans are able to instantly retrieve memories, or why humans are able to form abstract ideas.

In the last paragraph of the book, Goff makes a pitch that kind of follows that classic salesman's advice to “sell the sizzle not the steak.” He states the following (imagine some violins playing as you read this passage):

“Panpsychism offers a way of 're-enchanting the universe.' On the panpsychist view, the universe is like us; we belong in it. We need not live exclusively in the human realm, ever more diluted by globalization and consumerist capitalism. We can live in nature, in the universe. We can let go of nation and tribe, happy in the knowledge that there is a universe that welcomes us.”

But I fail to see any reason why a belief in panpsychism would produce any good change in human behavior. I can also imagine it having a bad effect. If you believe that all matter is conscious, you might have no particular guilt about killing someone. You might think to yourself: “He will still be conscious, even if I kill him, because all matter is conscious.” Similarly, if you believe that all matter is conscious, you might think it would be no great tragedy if all humanity were to become extinct, on the grounds that this would produce only a slight reduction in the total consciousness that exists in the universe (humanity having less than .0000000000000000000000000000000000001 of the universe's matter).

When panpsychists use simplistic shrinkage to describe mind as a mere "property" or "feature," it is like someone telling you that New York City is just a geographical coordinate, or like someone telling you that Brazil is just a pair of sounds someone can make with his mouth. 

Scientific American has an interview with Goff about his book. Goff states the following:

"The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain’s most basic parts."

We can try to imagine such a whimsical possibility. A quark might have an experience of a dull, static existence stuck inside an atomic nucleus. An electron might have an experience of constantly whizzing around a nucleus at incredible speeds, like some person stuck on an amusement park ride. Or a neuron might have an experience of just sitting there motionless inside a brain.  If there were billions or trillions or quadrillions of such tiny micro-experiences, they would never add up to anything like the experience of being a mobile thinking human free to walk around anywhere he wishes. 

Saturday, December 7, 2019

The Guy with the Smallest Brain Had the Highest IQ

According to the theory that your brain creates your mind and stores your memories, we should expect removal of half of the brain to have a most drastic effect on memory and intelligence. But at the link here and the link here you can read about many cases showing good preservation of memory and intelligence even after half a brain was removed to treat epileptic seizures.

There is a new study relating to the topic of intelligence and removal of half of the brain.  Once again, the study reports facts shockingly inconsistent with standard claims that the brain is the source of the human mind. But the press reporting on this study is feeding us a kind of "cover story" trying to explain away the shocking result.  Upon close inspection, this "cover story" falls apart. 

The study involved brain scans of six patients who had half of their brains removed.  Table S3 of the supplemental information of the study reveals that the intelligence quotients (IQ scores) of the six subjects were 84, 95, 91, 99,  96 and 80. So most of the six were fairly smart, even though half of their brains were gone.  How could this be when half of their brains were missing? 
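As a quick sanity check on "fairly smart," we can average the six reported scores ourselves; this arithmetic is mine, not the study's:

```python
# IQ scores of the six half-brain patients, as listed in Table S3 of the study
scores = [84, 95, 91, 99, 96, 80]
mean_iq = sum(scores) / len(scores)
print(round(mean_iq, 1))  # prints 90.8
```

An average of about 90.8 sits well within the normal range, despite each subject missing half a brain.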

In stories such as the story in Discover magazine, it is suggested that "brain rewiring" can explain such a thing. The story states the following:

"In a study published Tuesday in Cell Reports, scientists studied six of these patients to see how the human brain rewires itself to adapt after major surgery. After performing brain scans on the patients, the researchers found that the remaining hemisphere formed even stronger connections between different brain networks — regions that control things like walking, talking and memory —  than in healthy control subjects. And the researchers suggest that these connections enable the brain, essentially, to function as if it were still whole."

The summary above is not accurate, as it tells a story that is not true for one of the six patients, as I will explain below. This hard-to-swallow story (repeated by the New York Times) is reassuring if you wish to keep believing that the brain is the source of your mind.  The person who buys such a story can reassure himself kind of like this:

"How do people stay smart when you take out half of their brain? It's simple: the brain just rewires itself so that the half works as good as a whole. It acts kind of like a computer that reprograms itself to keep functioning like normal when you yank out half of its components."

We know of no machines ever built that have such a capability.  All brains engage in some "brain rewiring" every year, so any mental effect can always be attributed to "brain rewiring." We cannot dream of how a brain could possibly be clever enough to rewire itself to perform just as well when half of its matter was removed.   When we take a close look at the data in the study, it shows that this "brain rewiring" story does not hold up for the smartest subject in the study. 

In Table S4 of the study, we have measurements based on brain scanning, designed to show the level of connectivity in the brains of the six subjects.  Some of the six subjects have a slightly higher average connectivity score than the controls, but not by much.  The average connectivity scores for the controls with normal brains were .30 and .35, while the average connectivity scores for the six patients with half a brain were .43, .45, .35, .30, .43, and .41.  So it is merely true that the average brain connectivity score of the patients with half a brain was slightly higher than that of the normal controls.  And when we look at another metric (the "max" score listed at the end of Table S4), we see that all of the half-brain subjects had lower "brain connectivity" scores than the controls.  The "max" connectivity scores for the controls with normal brains were .90 and .74, but the "max" connectivity scores for the six patients with half a brain were only .57, .67, .49, .51, .63, and .62.  So the evidence for greater brain connectivity or "nicely rewired brains" after removal of half a brain is actually quite thin.
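To check this comparison, here is a short sketch re-deriving it from the Table S4 numbers quoted above (the variable names are mine):

```python
# Connectivity figures as reported in Table S4 of the study
control_avg = [0.30, 0.35]
patient_avg = [0.43, 0.45, 0.35, 0.30, 0.43, 0.41]
control_max = [0.90, 0.74]
patient_max = [0.57, 0.67, 0.49, 0.51, 0.63, 0.62]

def mean(xs):
    return sum(xs) / len(xs)

# Average connectivity: patients are only modestly higher than controls
# (roughly 0.40 versus roughly 0.33).
assert mean(patient_avg) > mean(control_avg)

# "Max" connectivity: every half-brain patient scores below the lowest control.
assert max(patient_max) < min(control_max)
```

Both facts can coexist, which is why the "rewired for extra connectivity" story is weaker than the press coverage suggests.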

Interestingly, the half-brain patient with the highest intelligence (labeled HS4, with an IQ of 99) had an average brain connectivity score of only .30, the same as one group of controls with normal brains, and less than the brain connectivity of the other control group.   So the smartest person with half a brain did not have any greater brain connectivity that could explain his normal intelligence.  How can this subject HS4 have had normal intelligence with only half a brain?  In this case, favorable brain rewiring or greater brain connectivity cannot explain the result.   So the "cover story" of "their brains rewired to keep them smart" falls apart.


The half brain of subject HS4, IQ of 99, average brain wiring

The only way we can explain such results is by postulating that the human brain is not actually the source of the human mind.  If the human brain is neither the source of the human mind nor the storage place of memories, we should not find any of the results mentioned in this post to be surprising. 

Subject HS4 is not by any means the most remarkable case of a patient with half a brain and a good mind. The study here is entitled "Development of above normal language and intelligence 21 years after left hemispherectomy."  After they removed the part of the brain claimed to be the "center of language," a subject developed "above normal" language and intelligence. 

Then there is the case of Alex, who did not start speaking until the left half of his brain was removed. A scientific paper describing the case says that Alex “failed to develop speech throughout early boyhood.” He could apparently say only one word (“mumma”) before his operation to treat epileptic seizures. But then following a hemispherectomy (also called a hemidecortication) in which half of his brain was removed at age 8.5, “and withdrawal of anticonvulsants when he was more than 9 years old, Alex suddenly began to acquire speech.” We are told, “His most recent scores on tests of receptive and expressive language place him at an age equivalent of 8–10 years,” and that by age 10 he could “converse with copious and appropriate speech, involving some fairly long words.” Astonishingly, the boy who could not speak with a full brain could speak well after half of his brain was removed. The half of the brain removed was the left half – the very half that scientists tell us has more to do with language than the right half.

What is also interesting in the new study is that when we cross-compare Figure 1 with Table S3 (in the supplemental information), we find that the patient with the largest brain (after the hemispherectomy operation) had the lowest IQ, and that the patient with the smallest brain had the highest IQ.  In Figure 1 the brain of the subject with an IQ of 80 (subject HS6) looks much larger than the brain of the subject with an IQ of 99 (subject HS4).  Such a result is not surprising under the hypothesis that your brain is not the source of your mind.  It should also not be surprising to anyone who considers the fact that the brain of the Neanderthals (presumably not as smart as we modern humans) was substantially larger than the brain of modern humans. 

Saturday, November 9, 2019

The Lack of Evidence for Memory-Storage Engram Cells

There are some very good reasons for thinking that long-term memories cannot be stored in brains, which include:
  • the impossibility of credibly explaining how the instantaneous recall of some obscure and rarely accessed piece of information could occur as a neural effect, in a brain that lacks any indexing system and is subject to a variety of severe signal-slowing effects;
  • the impossibility of explaining how reliable accurate recall could occur in a brain subject to many types of severe noise effects;
  • the short lifetimes of proteins in synapses, the place where scientists most often claim our memories are stored;
  • the lack of any credible theory explaining how memories could be translated into neural states;
  • the complete failure to ever find any encoded information in neurons or synapses other than the genetic information in DNA;
  • the lack of any known read or write mechanism in a brain.
But scientists occasionally produce research papers trying to persuade us that memories are stored in a brain, in cells that are called "engram cells." In this post, I will discuss why such papers are not good examples of experimental science, and do not provide any real evidence that a memory was stored in a brain. I will discuss seven problems that we often see in such science papers. The "sins" I refer to are merely methodological sins rather than moral sins. 

Sin #1: assuming or acting as if a memory is stored in some exact speck-sized spot of a brain without any adequate basis for such a “shot in the dark” assumption.

Scientists never have a good basis for believing that a particular memory is stored in some exact tiny spot of the brain. But a memory experiment will often involve some assumption that a memory is stored in one exact spot of the brain (such as some exact spot of a cubic millimeter in width). For example, an experimental study may reach some conclusion (based on inadequate evidence) about a memory being stored in some exact tiny spot of the brain, and then attempt to reactivate that memory by electrically or optogenetically stimulating that exact tiny spot.

The type of reasoning that is used to justify such a “shot in the dark” assumption is invariably dubious. For example, an experiment may observe parts of a brain of an animal that is acquiring some memory, and look for some area that is “preferentially activated.” But such a technique is as unreliable as reading tea leaves. When brains are examined during learning activities, brain regions (outside of the visual cortex) do not actually show more than a half of 1% signal variation. There is never any strong signal allowing anyone to be able to say with even a 25% likelihood that some exact tiny part of the brain is where a memory is stored. If a scientist picks some tiny spot of the brain based on “preferential activation” criteria, it is very likely that he has not picked the correct location of a memory, even under the assumption that memories are stored in brains. Series of brain scans do not show that some particular tiny spot of the brain tends to repeatedly activate to a greater degree when some particular memory is recalled. 

Sin #2: Either a lack of a blinding protocol, or no detailed discussion of how an effective technique for blinding was achieved.

Randomization and blinding are very important scientific techniques for avoiding experimenter bias. For example, what is called the “gold standard” in experimental drug studies is a type of study called a double-blind, randomized experiment. In such a study, neither the doctors or scientific staff handing out pills nor the subjects taking the pills know whether the pills are the medicine being tested or a placebo with no effect.

If similar randomization and blinding techniques are not used in a memory experiment, there will be a high chance of experimenter bias. For example, let's suppose a scientist looks for memory behavior effects in two groups of animals, the first being a control group having no stimulus designed to affect memory, and the second group having a stimulus designed to affect memory. If the scientist knows which group is which when analyzing the behavior of the animals, he will be more likely to judge the animals' behavior in a biased way, so that the desired result is recorded.

A memory experiment can be very carefully designed to achieve this blind randomization ideal that minimizes the chance of experimenter bias. But such a thing is usually not done in memory experiments purporting to show evidence of a brain storage of memories. Scientists working for drug trials are very good about carefully designing experiments to meet the ideal of blind randomization, because they know the FDA will review their work very carefully, rejecting the drug for approval if the best experimental techniques were not used. But neuroscientists have no such incentive for experimental rigor.

Even in studies where some mention is made of a blinding protocol, there is very rarely any discussion of how an effective protocol was achieved. When dealing with small groups of animals, it is all too easy for a blinding protocol to be ineffective and worthless. For example, let us suppose there is one group of 10 mice that have something done to their brains, and some other control group that has no such thing done. Both may be subjected to a stimulus, and their “freezing behavior” may be judged. The scientists judging such a thing may be supposedly “blind” to which experimental group is being tested. But if a scientist is able to recognize any physical characteristic of one of the mice, he may actually know which group the mouse belongs to. So it is very easy for a supposed blinding protocol to fail. What is needed to have confidence in such studies is not a mere mention of a blinding protocol, but a detailed discussion of exactly how an effective blinding protocol was achieved. We almost never get such a thing in memory experiments. The minority of them that refer to a blinding protocol almost never discuss in detail how an effective blinding protocol was achieved, one that really prevented scientists from knowing something that might have biased their judgments. 

For an experiment that judges "freezing behavior" in rodents, an effective blinding protocol would be one in which such freezing was judged by a person who never previously saw the rodents being tested. Such a protocol would guarantee that there would be no recognition of whether the animals were in an experimental group or a control group. But in "memory engram" papers we never read that such a thing was done.  To achieve an effective blinding protocol, it is not enough to use automated software for judging freezing, for such software can achieve biased results if it is run by an experimenter who knows whether or not an animal was in a control group. 
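One ingredient of such a protocol can be sketched in a few lines: a third party assigns each animal an opaque code, so the person scoring "freezing" cannot infer group membership from an animal's ID or testing order. The function name and coding scheme below are my own hypothetical illustration, not drawn from any of the papers discussed; as noted above, a real protocol would also need a scorer who had never seen the animals.

```python
import random

def blind_codes(animal_ids, seed=None):
    """Assign each animal an opaque code (e.g. 'S003') in random order.
    The resulting key should be held by a third party, never by the
    person scoring behavior.  Hypothetical sketch, not an actual
    protocol from the studies discussed."""
    rng = random.Random(seed)
    codes = [f"S{i:03d}" for i in range(1, len(animal_ids) + 1)]
    rng.shuffle(codes)  # randomize which code goes to which animal
    return dict(zip(animal_ids, codes))

# Usage: the scorer sees only codes like 'S002', never 'ctrl-1' or 'stim-1'
key = blind_codes(["ctrl-1", "ctrl-2", "stim-1", "stim-2"], seed=42)
print(key)
```

Trivial as this step is, the papers discussed below do not describe even this much of their blinding procedure in detail.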

Sin #3: inadequate sample sizes, and a failure to do a sample size calculation to determine how large a sample size to test with.

Under ideal practice, as part of designing an experiment a scientist is supposed to perform what is called a sample size calculation. This is a calculation that is supposed to show how many subjects to use per study group to provide adequate evidence for the hypothesis being tested. Sample size calculations are included in rigorous experiments such as experimental drug trials.

The PLOS paper here reported that only one of the 410 memory-related neuroscience papers it studied had such a calculation. The PLOS paper reported that in order to achieve a moderately convincing statistical power of .80, an experiment typically needs to have 15 animals per group; but only 12% of the experiments had that many animals per group. Referring to statistical power (a measure of how likely a result is to be real and not a false alarm), the PLOS paper states, “no correlation was observed between textual descriptions of results and power.” In plain English, that means that there's a whole lot of BS flying around when scientists describe their memory experiments, and that countless cases of very weak evidence have been described by scientists as if they were strong evidence.

The paper above seems to suggest that 15 animals per study group are needed.  But in her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky PhD calculates (using Ioannidis’s data) that the median effect size of neuroscience studies is about .51. She then states the following, talking about statistical power:

"To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group."

So the number of animals per study group for a moderately convincing result (one with a statistical power of .80) is more than 15 (according to one source), and something like 60, according to another source.  But the vast majority of "memory engram" papers do not even use 15 animals per study group.
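Zalocusky's figures can be roughly reproduced with the standard sample-size formula for a two-sample comparison. The sketch below uses the normal approximation (which runs slightly low compared to exact t-based values for small groups, which is why it gives 10 and 30 where she reports 12 and 31); the effect size of 0.51 comes from the post quoted above, and everything else is textbook statistics:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, power, alpha=0.05):
    """Approximate per-group sample size for a two-sample test,
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Median neuroscience effect size ~0.51 (per Zalocusky, from Ioannidis's data)
for power in (0.2, 0.5, 0.8):
    print(power, n_per_group(0.51, power))  # prints 10, 30, and 61 per group
```

The key point survives the approximation: at the field's median effect size, a moderately convincing power of .80 demands on the order of 60 animals per group, several times more than the "memory engram" papers typically use.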

Sin #4: a high occurrence of low statistical significance near the minimum of .05, along with a frequent hiding of such unimpressive results, burying them outside of the main text of a paper rather than placing them in the abstract of the paper.

Another measure of the robustness of a research finding is the statistical significance reported in the paper. Memory research papers often report marginal statistical significance close to .05.

Nowadays you can publish a science paper claiming a discovery if you are able to report a statistical significance of only .05. But it has been argued by 72 experts that such a standard is way too loose, and that things should be changed so that a discovery can only be claimed if a statistical significance of .005 is reached, which is a level ten times harder to achieve.

It should be noted that it is a big misconception that when you have a result with a statistical significance (or P-value) of .05, this means there is a probability of only .05 that the result was a false alarm and that the null hypothesis is true. This paper calls such an idea “the most pervasive and pernicious of the many misconceptions about the P value.” 
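A back-of-the-envelope calculation shows why the two probabilities can differ drastically. Suppose, purely for illustration (all three numbers below are assumptions, not measurements), that only 10% of hypotheses tested in a field are true, that tests have a statistical power of .5, and that the significance threshold is .05. Then nearly half of all "significant" results are false alarms:

```python
# Hypothetical illustration: p < .05 does not mean a 5% chance of a false alarm.
hypotheses = 1000
true_effects = 100           # assumed: 10% of tested hypotheses are true
power, alpha = 0.5, 0.05     # assumed power and significance threshold

true_positives = true_effects * power                   # 50 real effects detected
false_positives = (hypotheses - true_effects) * alpha   # 45 nulls reach p < .05
fdr = false_positives / (true_positives + false_positives)
print(round(fdr, 2))  # prints 0.47: nearly half of "significant" results are false
```

Under these assumptions, a result at the p < .05 threshold has roughly a 47% chance of being a false alarm, not a 5% chance.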

When memory-related scientific papers report unimpressive results having a statistical significance such as only .03, they often make it hard for people to see this unimpressive number. An example is the recent paper “Artificially Enhancing and Suppressing Hippocampus-Mediated Memories.”  Three of the four statistical significance levels reported were only .03, but this was not reported in the summary of the paper, and was buried in hard-to-find places in the text.

Sin #5: using presumptuous or loaded language in the paper, such as referring in the paper to the non-movement of an animal as “freezing” and referring to some supposedly "preferentially activated" cell as an "engram cell." 

Papers claiming to find evidence of memory engrams are often guilty of using presumptuous language that presupposes what they are attempting to prove. For example,  the non-movement of a rodent in an experiment is referred to by the loaded term "freezing," which suggests an animal freezing in fear, even though we have no idea whether the non-movement actually corresponds to fear.  Also, some cell that is guessed to be a site of memory storage (because of some alleged "preferential activation" that is typically no more than a fraction of 1 percent) is referred to repeatedly in the papers as an "engram cell,"  which means a memory-storage cell, even though nothing has been done to establish that the cell actually stores a memory. 

We can imagine a psychology study using similar loaded language.  The study might make hidden camera observations of people waiting at a bus stop.  Whenever the people made unpleasant expressions, such expressions would be labeled in the study as "homicidal thoughts."  The people who had slightly more of these unpleasant expressions would be categorized as "murderers."   The study might say, "We identified two murderers at the bus stop from their increased display of homicidal expressions." Of course, such ridiculously loaded, presumptuous language has no place in a scientific paper.  It is almost as bad for "memory engram" papers to be referring so casually to "engram cells" and "freezing" when neither fear nor memory storage at a specific cell has been demonstrated.  We can only wonder whether the authors of such papers were thinking something like, "If we use the phrase engram cells as much as we can, maybe people will believe we found some evidence for engram cells." 

Sin #6: failing to mention or test alternate explanations for the non-movement of an animal (called “freezing”), explanations that have nothing to do with memory recall.

A large fraction of all "memory engram" papers hinge on judgments that some rodent engaged in increased "freezing behavior,"  perhaps while some imagined "engram cells" were electrically or optogenetically stimulated. A science paper says that it is possible to induce freezing in rodents by stimulating a wide variety of regions. It says, "It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014).” 

But we are not informed of such a reality in quite a few papers claiming to supply evidence for an engram. In such studies typically a rodent will be trained to fear some stimulus. Then some part of the rodent's brain will be stimulated when the stimulus is not present. If the rodent is nonmoving (described as "freezing") more often than a rodent whose brain is not being stimulated, this is hailed as evidence that the fearful memory is being recalled by stimulating some part of the brain.  But it is no such thing. For we have no idea whether the increased freezing or non-movement is being produced merely by the brain stimulation, without any fear memory, as so often occurs when different parts of the brain are stimulated.

If a scientist thinks that some tiny part of a brain stores a memory, there is an easy way to test whether there is something special about that part of the brain. The scientists could do the "stimulate cells and test fear" kind of test on multiple parts of the brain, only one of which was the area where the scientist thought the memory was stored. The results could then be compared, to see whether stimulating the imagined "engram cells" produced a higher level of freezing than stimulating other random cells in the brain. Such a test is rarely done. 

Sin #7: a dependency on arbitrarily analyzed brain scans or an uncorroborated judgment of "freezing behavior," which is not a reliable way of measuring fear.

A crucial element of a typical "memory engram" science paper is a judgment of what degree of "freezing behavior" a rodent displayed.  The papers typically equate non-movement with fear coming from recall of a painful stimulus. This doesn't make much sense. Many times in my life I saw a house mouse that caused me or someone else to shriek, and I never once saw a mouse freeze. Instead, they seem invariably to flee rather than to freeze. So what sense does it make to assume that the degree of non-movement ("freezing") of a rodent should be interpreted as a measurement of fear?  Moreover, judgments of the degree of "freezing behavior" in mice are too subjective and unreliable. 

Fear causes a sudden increase in heart rate in rodents, so measuring a rodent's heart rate is a simple and reliable way of corroborating a manual judgment that a rodent has engaged in increased "freezing behavior." A scientific study showed that heart rates of rodents dramatically shoot up instantly from 500 beats per minute to 700 beats per minute when the rodent is subjected to the fear-inducing stimuli of an air puff or a platform shaking. But rodent heart rate measurements seem never to be used in "memory engram" experiments. Why are the researchers relying on unreliable judgments of "freezing behavior" rather than a far-more-reliable measurement of heart rate, when determining whether fear is produced by recall? In this sense, it's as if the researchers wanted to follow a technique that would give them the highest chance of getting their papers published, rather than using a technique that would give them the most reliable answer as to whether a mouse is feeling fear. 



Another crucial element of many "memory engram" science papers is analysis of brain scans.  But there are 1001 ways to analyze the data from a particular brain scan.  Such flexibility all but allows a researcher to find whatever "preferential activation" result he is hoping to find.  

Page 68 of this paper discusses how brain scan analysis involves all kinds of arbitrary steps:

"The time series of voxel changes may be motion-corrected, coregistered, transformed to match a prototypical brain, resampled, detrended, normalized, smoothed, trimmed (temporally or spatially)...Furthermore, each of these steps can be done in a number of ways, each with many free parameters that experimenters set, often arbitrarily....The wholebrain analysis is often the first step in defining a region of interest in which the analyses may include exploration of time courses, voxelwise correlations, classification using support vector machines or other machine learning methods, across-subject correlations, and so on. Any one of these analyses requires making crucial decisions that determine the soundness of the conclusions."

The problem is that there is no standard way of doing such things. Each study arbitrarily uses some particular technique, and it is usually true that the results would have been much different if some other brain scan analysis technique had been used. 
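The cost of such flexibility can be illustrated with simple arithmetic. If a researcher can choose among k roughly independent analysis pipelines (the independence and the pipeline counts below are simplifying assumptions, for illustration only) and applies them to pure-noise data, the chance that at least one pipeline yields p < .05 grows rapidly with k:

```python
# Assumed simplification: each of k analysis pipelines is an independent
# test of pure noise at a .05 threshold.  The chance that at least one
# pipeline produces a "significant" result is 1 - 0.95**k.
for k in (1, 5, 20):
    print(k, round(1 - 0.95**k, 2))  # prints 0.05, 0.23, 0.64
```

So a researcher free to try twenty analysis variants on noise has better-than-even odds of finding a "significant" effect somewhere, which is why unregistered analytic flexibility undermines the brain-scan results in these papers.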

Examples of Such Shortcomings

Let us look at a recent paper that claimed evidence for memory engrams. The paper stated, “Several studies have identified engram cells for different memories in many brain regions including the hippocampus (Liu et al., 2012; Ohkawa et al., 2015; Roy et al., 2016), amygdala (Han et al., 2009; Redondo et al., 2014), retrosplenial cortex (Cowansage et al., 2014), and prefrontal cortex (Kitamura et al., 2017).” But the close examination below will show that none of these studies are robust evidence for memory engrams in the brain. 

Let's take a look at some of these studies. The Kitamura study that claimed to have “identified engram cells” in the prefrontal cortex is the study “Engrams and circuits crucial for systems consolidation of a memory.”  In Figure 1 (containing multiple graphs), we learn that the numbers of animals used in different study groups or experimental activities were 10, 10, 8, 10, 10, 12, 8, and 8, for an average of 9.5. In Figure 3 (also containing multiple subgraphs), we have even smaller numbers. The numbers of animals mentioned in that figure are 4, 4, 5, 5, 5, 10, 8, 5, 6, 5 and 5. None of these numbers are anything like what would be needed for a moderately convincing result, which would be a minimum of 15 animals per study group. So the study is very guilty of Sin #3. The study is also guilty of Sin #2, because no detailed description is given of an effective blinding protocol. The study is also guilty of Sin #4, because Figure 3 lists two statistical significance values of “< 0.05,” which is the least impressive result you can get published nowadays. Studies reaching a statistical significance of less than 0.01 will always report such a result as “< 0.01” rather than “< 0.05.”  The study is also guilty of Sin #7, because it relies on judgments of freezing behavior of rodents, which were not corroborated by something such as heart rate measurements. 

The Liu study that claimed to have “identified engram cells” in the hippocampus of the brain is the study “Optogenetic stimulation of a hippocampal engram activates fear memory recall.” We see in Figure 3 that inadequate sample sizes were used. The numbers of animals listed in that figure (during different parts of the experiments) are 12, 12, 12, 5, and 6, for an average of 9.4. That is not anything like what would be needed for a moderately convincing result, which would be a minimum of 15 animals per study group. So the study is guilty of Sin #3. The study is also guilty of Sin #7. The experiment relied crucially on judgments of fear produced by manual assessments of freezing behavior, which were not corroborated by any other technique such as heart-rate measurement. The study does not describe in detail any effective blinding protocol, so it is also guilty of Sin #2. The study is also guilty of Sin #6. The study involved stimulating certain cells in the brains of mice, with something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory is being recalled when the stimulation occurs. 

The Ohkawa study that claimed to have “identified engram cells” in the hippocampus of the brain is the study “Artificial Association of Pre-stored Information to Generate a Qualitatively New Memory.” In Figure 3 we learn that the animal study groups had a size of about 10 or 12, and in Figure 4 we learn that the animal study groups used were as small as 6 or 8 animals. So the study is guilty of Sin #3. Because the paper used a “zap their brains and look for freezing” approach, without discussing or testing alternate explanations for freezing behavior having nothing to do with memory, the Ohkawa study is also guilty of Sin #6. Judgment of fear is crucial to the experimental results, and it was done purely by judging "freezing behavior," without measurement of heart rate.  So the study is also guilty of Sin #7. This particular study has a few skimpy phrases claiming that a blinding protocol was used: “Freezing counting experiments were conducted double blind to experimental group.” But no detailed discussion is made of how an effective blinding protocol was achieved, so the study is also guilty of Sin #2.

The Roy study that claimed to have “identified engram cells” in the hippocampus of the brain is the study "Memory retrieval by activating engram cells in mouse models of early Alzheimer’s disease."  Looking at Figure 1, we see that the study groups used sometimes consisted of only 3 or 4 animals, which is a joke from any kind of statistical power standpoint. Looking at Figure 3, we see the same type of problem. The text mentions study groups of only "3 mice per group," "4 mice per group," "9 mice per group," and "10 mice per group."   So the study is guilty of Sin #3. Although a blinding protocol is mentioned in the skimpiest language,  no detailed discussion is made of how an effective blinding protocol was achieved, so the study is also guilty of Sin #2.  Some of the results reported have a statistical significance of only "<.05," so the study is guilty of Sin #4. 

The Han study (also available here) that claimed to have “identified engram cells” in the amygdala is the study "Selective Erasure of a Fear Memory." In Figure 1 we see a larger-than-average sample size was used for two groups (17 and 24), but a way-too-small sample size of only 4 was used for the corresponding control group. You need a sufficiently high number of animals in all study groups, including the control group, for a reliable result.  The same figure tells us that in another experiment the number of animals in the study group was only 5 or 6, which is way too small. Figure 3 tells us that in other experiments only 8 or 9 mice were used, and Figure 4 tells us that in other experiments only 5 or 6 mice were used. So this paper is guilty of Sin #3. No mention is made in the paper of any blinding protocol, so this paper is guilty of Sin #2. Figure 4 refers to two results with a borderline statistical significance of only "< 0.05," so this paper is also guilty of Sin #4.  The paper relies heavily on judgments of fear in rodents, but these were uncorroborated judgments based on "freezing behavior," without any measure of heart rate to corroborate such judgments. So the paper is also guilty of Sin #7. 

The Redondo study that claimed to have “identified engram cells” in the amygdala is the study "Bidirectional switch of the valence associated with a hippocampal contextual memory engram."  We see 5 or 6 results reported with a borderline statistical significance of only "< 0.05," so this paper is guilty of Sin #4. No detailed description is given of how an effective blinding protocol was achieved, and only the skimpiest mention is made of blinding, so this paper is guilty of Sin #2.  The study used only "freezing behavior" to try to measure fear, without corroborating such a thing by measuring heart rates.  So the paper was guilty of Sin #7.  The study involved stimulating certain cells in the brains of mice, with something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory is being recalled when the stimulation occurs.  So the study is also guilty of Sin #6. 

The Cowansage study that claimed to have “identified engram cells” in the retrosplenial cortex of the brain is the study "Direct Reactivation of a Coherent Neocortical Memory of Context." Figure 2 tells us that only 12 mice were used for one experiment. Figure 4 tells us that only 3 and 5 animals were used for other experiments. So this paper is guilty of Sin #3. No detailed description is given of how an effective blinding protocol was achieved, and only the skimpiest mention is made of blinding, so this paper is guilty of Sin #2.  It's a paper using the same old "zap rodent brains and look for some freezing behavior" methodology, without explaining that such results can occur for reasons having nothing to do with memory recall. So the study is guilty of Sin #6. Some of the results reported have a statistical significance of only "<.05," so the study is guilty of Sin #4. 

So I have examined each of the papers that were claimed as evidence for memory traces or engrams in the brain. Serious problems have been found in every one of them.  Not a single one of the studies gave a detailed description of how an effective blinding protocol was executed. All of the studies were guilty of Sin #7.  Not a single one of the studies makes a claim to have followed some standardized method of brain scan analysis. Whenever there are brain scans, we can say that the experimenters merely chose one of 1001 possible ways to analyze brain scan data. Not a single one of the studies corroborated "freezing behavior" judgments by measuring heart rates of rodents to determine whether the animals suddenly became afraid. But all of the studies had a dependency on either brain scanning, uncorroborated freezing-behavior judgments, or both. The studies all used sample sizes far too low to get a reliable result (although one of them used a decent sample size for part of its results). 

The papers I have discussed are full of problems, and do not provide robust evidence for any storage of memories in animal brains. There is no robust evidence that memories are stored in the brains of any animal, and no robust evidence that any such thing as an "engram cell" exists. 

The latest press report of a "memory wonder" produced by scientists is a claim that scientists implanted memories in the brains of songbirds. For example, the Scientist magazine has an article entitled, "Researchers Implant Memories in Zebra Finch Brains." The relevant scientific study is hidden behind a paywall of Science magazine. But by reading the article, we can get enough information to have the strongest suspicion that the headline is a bogus brag. 

Of course, the scientists didn't actually implant musical notes into the brains of birds.  Nothing of the sort could ever occur, because no one has the slightest idea of how learned or episodic information could ever be represented as neural states. The scientists merely delivered little bursts of energy to the brains of some birds. The scientists claimed that the birds who got shorter bursts of energy tended to sing shorter songs. "When these finches grew up, they sang adult courtship songs that corresponded to the duration of light they’d received," the story tells us.  Of course, it would not be very improbable for such a mere "duration similarity" to occur by chance.  

It is absurd to describe such a mere "duration similarity" as a memory implant.  It was not at all true that the birds sang some melody that had been artificially implanted in their heads.  The scientists in question have produced zero evidence that memories can be artificially implanted in animals.  From an example like this, we get the impression that our science journalists will uncritically parrot any claim of success in brain experiments with memory, no matter how glaring the shortcomings of the relevant study are. 

There is no robust evidence for engram cells, and those who have tried to present evidence for memory storage cells have never been able to articulate a coherent detailed theory about how human memory experiences or human learned knowledge could ever be translated into neural states or cell states.  The engram theorist is therefore like a person who claims there is evidence for a city floating way up in the sky, but who is unable to tell you how a city could be floating in the air.