Sunday, February 24, 2019

"Brains Store Memories" Dogma Versus the Reality of Noisy Brains

Neuroscientists typically maintain that human mental phenomena are entirely produced by the brain. But this claim is inconsistent with many low-level facts that neuroscientists have discovered. Remarkably, the facts and details that neuroscientists have learned on a low level frequently contradict the dogmatic high-level assertions neuroscientists make.

The claims and facts below summarize this conflict.

High-level Neuroscientist Claims vs. Low-Level Facts Discovered by Neuroscientists

Claim: “Brains produce thinking.”
  • Human cognitive ability and memory are not strongly damaged by hemispherectomy operations in which half of a brain is removed to treat epilepsy seizures.
  • Most of Lorber's hydrocephalus patients, whose brains consisted mostly of watery fluid, had above-average intelligence; and a Frenchman long held a civil service job while almost all of his brain was gone.
  • Brain scans do not show brains working significantly harder during either heavy thinking or recall, and no signal change greater than 1% occurs during such activities.

Claim: “When we do accurate mental calculations, it is our neurons that are doing the work.”
  • Neurons are noisy, and synapses transmit signals with only a 50% likelihood or less – the type of thing that should prevent the accurate mental arithmetic that savants can perform.

Claim: “Our memories are stored in our brains.”
  • Neurons and synapses have been extensively examined at very high microscopic resolutions, and no sign of stored or encoded information has been found in them other than the gene information in DNA.
  • There is high protein turnover in the synapses that neuroscientists claim to be the storage place of memories, and the average lifetime of the proteins that make up synapses is only a few weeks – only a thousandth of the lifespan of very old memories in old people.
  • There seems to be nothing in the human brain resembling the write mechanism we see in storage systems such as computers.

Claim: “When we remember, we read data from our brains.”
  • There seems to be nothing in the human brain resembling the read mechanism we see in storage systems such as computers.
  • There is in the human brain no position coordinate system, no indexing, and no neuron numbering system, nor anything else that would seem to make possible the instantaneous recall of information from some very precise location in a brain, in a manner similar to the retrieval of data from a particular page of a particular book.
  • Although we would expect information to be reliably transmitted across neurons during precise and accurate human recall, neurons are actually quite noisy, and transmit signals with only a low reliability.
  • Synaptic density studies show that the density of synapses in brains drops strongly between puberty and adulthood, at the very time when learned knowledge is piling up.

By following the links above, you can read detailed discussions of the facts listed above – except for my claims about neurons being very noisy, which I will justify in this post.

When we talk about the noise in a communication system, we can imagine this as a kind of static that prevents the transmission from occurring without errors. A young reader may not even know what static is, since nowadays digital communication occurs with very little noise. But I experienced static frequently in my youth, back in the days long before the internet. One type of static would occur when I listened to the radio. When I tuned in to a radio station too far away, the radio signal would be mixed with a crackling noise or static that might prevent me from hearing particular words or musical notes in the transmission. In my youth there was also a problem with television noise or static. On top of a TV set there would be an antenna, and if it wasn't pointing just right, the TV signal might be rather noisy. The noise might be of a visual type, with random little blips appearing on the TV screen. Sometimes the static would be so bad that you couldn't recognize much of anything on the TV screen.

The table below illustrates an example of noise in a signal transmission system.

Low-noise system
  Input:  “Toto, I've a feeling we're not in Kansas anymore.”
  Output: “Toto, I've a feeling we're not in Kansas anymore.”

High-noise system
  Input:  “Toto, I've a feeling we're not in Kansas anymore.”
  Output: “Tojo, I've a f2@eling we're Xot in K3$sas anymore.”
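The corruption illustrated in the table can be sketched in a few lines of code. This is purely an illustration (the error rate and the substitute alphabet are arbitrary choices of mine, not measurements of any real channel): each character of a message is independently scrambled with some fixed probability.

```python
import random

def transmit(message, error_rate, rng=None):
    """Simulate a noisy channel: each character is independently
    replaced by a random character with probability error_rate."""
    rng = rng or random.Random(0)  # fixed seed, so runs are reproducible
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789!@#$%"
    out = []
    for ch in message:
        if rng.random() < error_rate:
            out.append(rng.choice(alphabet))  # corrupted character
        else:
            out.append(ch)                    # character survives intact
    return "".join(out)

line = "Toto, I've a feeling we're not in Kansas anymore."
print(transmit(line, 0.0))   # low-noise system: message arrives intact
print(transmit(line, 0.15))  # moderate noise: scattered corruption
print(transmit(line, 0.5))   # 50% per-character noise: heavy scrambling
```

At an error rate of 0 the message arrives intact; at 0.5 roughly half the characters are scrambled, like the high-noise row of the table.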

A neuron acts as an electrical/chemical signal transmitter. A neuron will receive an electrical/chemical input, and transmit an electrical/chemical output. But a neuron does not act as efficiently and reliably as a cable TV wire or a computer cable that transmits signals with a very low error rate. Neuroscientists know that a large amount of noise occurs when neurons transmit signals. In other words, when a neuron receives a particular electrical/chemical input signal, there is a very significant amount of chance and variability involved in what type of electrical/chemical output will come out of the neuron. The article on “neuronal noise” identifies many different types of noise that might degrade neuron performance: thermal noise, ionic conductance noise, ion pump noise, ion channel shot noise, synaptic release noise, synaptic bombardment, and connectivity noise.

In a very recent interview, an expert on neuron noise states the following:

There is, for example, unreliable synaptic transmission. This is something that an engineer would not normally build into a system. When one neuron is active, and a signal runs down the axon, that signal is not guaranteed to actually reach the next neuron. It makes it across the synapse with a probability like one half, or even less. This introduces a lot of noise into the system.

So according to this expert, synapses (the supposed storage place of human memories) transmit signals with a probability of only about 50 percent or less. Now that's very heavy noise – the kind of noise you would have if half of the characters in your text messages got scrambled by your cell phone carrier. A scientific paper tells us, “Neuronal variability (both in and across trials) can exhibit statistical characteristics (such as the mean and variance) that match those of random processes.” Another scientific paper tells us that “Neural activity in the mammalian brain is notoriously variable/noisy over time.”

This is a problem for all claims that memories are retrieved from brains, because humans are known to be able to remember things very accurately, but “neural noise limits the fidelity of representations in the brain,” as a scientific paper tells us.

Now, a neuroscientist might claim that such facts can still be reconciled with the mental performance of humans. He might argue like this:

Yes, neurons are pretty slow and noisy, but that's why human memory is slow and unreliable. Think of how it works when you suddenly see some old schoolmate you haven't seen in twenty years. It may be a while before you remember their name. And when you do remember something about that person, your memory will probably not be terribly accurate. So you have a kind of slow, “noisy” memory.

But it is easy to come up with examples of human memory performing without error in a noiseless manner. I just closed my eyes and recited the following lines without any error at a rate faster than you can read these lines aloud:

I am the very model of a modern Major-General
I've information vegetable, animal, and mineral
I know the kings of England, and I quote the fights historical
From Marathon to Waterloo, in order categorical

I'm very well acquainted, too, with matters mathematical
I understand equations, both the simple and quadratical
About binomial theorem I'm teeming with a lot o' news
With many cheerful facts about the square of the hypotenuse

But that's not very impressive, for there are singers who can flawlessly sing at a very rapid pace, without any errors, the entire delightful song “I Am the Very Model of a Modern Major-General” from Gilbert and Sullivan's “The Pirates of Penzance,” and the song is about eight times longer than what I have quoted. Also, in the world of opera there are singers who can flawlessly sing every note and every word of the part of Hans Sachs in Wagner's four-hour opera Die Meistersinger von Nurnberg, an opera in which Hans is on stage singing for a large fraction of those four hours. There are other singers who can flawlessly sing the title role in the opera Siegfried, which requires the lead singer to sing on stage for most of its three hours, and still others who can flawlessly sing the role of Tristan, which makes a similar demand. In such cases we have a very rapid, error-free retrieval of an amount of information that would take many, many pages to write down.

A rock singer at a funky free-wheeling concert might get away with an error rate of 2% in his memory recall of words, but opera fans are very intolerant of errors. When Wagner fans (who have typically heard an opera many times on recordings) go to something like the Bayreuth festival, they expect singers to recall Wagner's notes and words with 100% fidelity, and that is what they usually get, even when hearing roles such as Tristan and Siegfried which require a singer to memorize hours of singing.  Every time an actor performs Hamlet, he recites 1480 lines of dialog, and many such actors recall all such lines without any errors. 
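The demand that a flawless performance places on per-item reliability can be quantified with a little arithmetic. If each of Hamlet's 1480 lines is recalled correctly with some independent probability p, an error-free performance has probability p raised to the 1480th power. The sketch below (the independence assumption is mine, purely for illustration) computes the p needed for even a coin-flip chance of a flawless run:

```python
lines = 1480                 # lines of dialog in the role of Hamlet
target = 0.5                 # desired chance of a completely flawless performance
p = target ** (1 / lines)    # required per-line recall reliability
print(p)                     # about 0.9995 -- under 5 errors per 10,000 lines
```

So even a 50/50 shot at perfection requires per-line reliability above 99.95% – nothing like the 50%-per-synapse transmission figure quoted earlier.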


Then there is Leslie Lemke, who according to this article "can remember and play back a musical piece of any length flawlessly after hearing it once."  It is well documented that there are quite a few Muslims who can recite the entire holy book of their religion, a book of some 80,000 words. Then there are people who flawlessly remember content that is hard to remember. According to the site of the Guinness Book of World Records, Rajveer Meena memorized pi to 70,000 digits, reciting those 70,000 digits without any errors. Lu Chao memorized pi to 67,000 digits. A 1917 scientific paper stated that one or more people had accurately "memorized the exact layout of words in more than 5,000 pages of the 12 books of the standard edition of the Babylonian Talmud."

How could such feats occur if memory retrieval were being performed by neurons and synapses that are very noisy? It could not. In these cases, human memory is acting at a reliability vastly surpassing what should be possible if memory retrieval or thought were a neural phenomenon.  A scientific paper states, "Neural noise limits the fidelity of representations in the brain."  But humans such as those I have mentioned seem to be able to recall huge amounts of learned text or song without any such degradation of "fidelity of representations."

A similar conclusion is forced on us when we consider the accuracy of the most impressive human calculators. In 2004 Alexis Lemaire was able to calculate in his head the 13th root of this number:

85,877,066,894,718,045,602,549,144,850,158,599,202,771,247,748,960,878,023,151,390,314,284,284,465,842,798,373,290,242,826,571,823,153,045,030,300,932,591,615,405,929,429,773,640,895,967,991,430,381,763,526,613,357,308,674,592,650,724,521,841,103,664,923,661,204,223

In only 77 seconds, according to the BBC, Lemaire was able to state the answer: 2396232838850303, the number which when raised to the 13th power equals the number above.  Here we have calculation accuracy far beyond anything that could be possible if noisy neurons were the source of human thought.
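Lemaire's answer can be spot-checked with exact integer arithmetic. The sketch below deliberately does not re-type the full 200-digit number (to avoid transcription errors); it just confirms that the 13th power of 2396232838850303 is a 200-digit number beginning 85,877..., matching the number printed above:

```python
root = 2396232838850303
power = root ** 13        # Python integers are arbitrary-precision, so this is exact
digits = str(power)
print(len(digits))        # 200 digits
print(digits[:5])         # 85877
```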

Given the high amount of noise in neurons and synapses, which would strongly degrade the accuracy of neural memory retrieval and neural signal transmission, the facts of very accurate human calculation and very accurate human memory recall (as shown by calculation savants, Hamlet actors, and Wagnerian opera singers) are very much in conflict with the dogmas that our thinking is performed by our brains and our memories are stored in and retrieved by our brains.  This is yet another case in which the low-level facts of neuroscience defy the dogmatic claims of neuroscientists. 

Think for a moment about the implications if a synapse can only transmit a signal with about a 50% reliability, as indicated by the previously quoted expert on neuron noise. This does not at all mean that people would recall things with about 50% accuracy if memories were stored in brains; it's much worse than that. Since any act of neural memory retrieval would involve innumerable signal transmissions through innumerable neurons, we would expect the actual accuracy to be only some tiny fraction of 50% if we were using synapses to retrieve our learned knowledge.  Similarly, if you play the game "Chinese whispers" (also called "gossip") at a school lunch table, and everyone at the table listens to noisy music on earphones while the story is whispered from player to player, the tenth person to receive the story will be unlikely to receive even 20 percent of it accurately.
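The compounding just described is simple to compute. Assuming (for illustration only) that each synapse in a retrieval path transmits with independent probability p, a signal crossing n synapses in series arrives with probability p raised to the nth power:

```python
def chain_reliability(p, n):
    """Probability that a signal survives n independent noisy
    transmissions in series, each succeeding with probability p."""
    return p ** n

# End-to-end reliability collapses quickly at p = 0.5:
for n in (1, 2, 5, 10, 20):
    print(n, chain_reliability(0.5, n))
```

At p = 0.5, a chain of only ten synapses already pushes end-to-end reliability below one chance in a thousand (0.5 to the 10th power is 1/1024).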

Let us imagine a planet in which the sky was perpetually covered in very thick clouds, so that no one had seen the stars or the local sun.  On such a planet there would be a great mystery: from where comes the heat that keeps life on the planet warm? If you were a rather clumsy thinker on such a planet, you might come up with some cheesy theory to explain the heat on your planet, and dogmatically cling to it -- maybe the theory that rocks on your planet warm the planet through radioactivity, or that heat shoots up from the hot core of the planet. But if you were a better thinker, you would say, "There is nothing anyone has observed that can explain this planet's heat -- it must come from some mysterious unseen reality."  It is something similar that we should say about our mental capabilities: that nothing we have observed can explain them, and that they must come mainly from some mysterious unseen reality. 

Monday, January 7, 2019

Memories Can Form Many Times Faster Than the Speed of Synapse Strengthening

The main theory of a brain storage of memories is that people acquire new memories through a strengthening of synapses. There are many reasons for doubting this claim. One is that information is generally stored through a writing process, not a strengthening process. It seems that there has never been a verified case of any information being stored through a process of strengthening.

If it were true that memories were stored by a strengthening of synapses, this would be a slow process. The only way in which a synapse can be strengthened is if proteins are added to it. We know that the synthesis of new proteins is a rather slow effect, requiring minutes of time. In addition, there would have to be some very complicated encoding going on if a memory was to be stored in synapses. The reality of newly-learned knowledge and new experience would somehow have to be encoded or translated into some brain state that would store this information. When we add up the time needed for this protein synthesis and the time needed for this encoding, we find that the theory of memory storage in brain synapses predicts that the acquisition of new memories should be a very slow affair, which can occur at only a tiny bandwidth, a speed which is like a mere trickle. But experiments show that we can actually acquire new memories at a speed more than 1000 times greater than such a tiny trickle.

One such experiment is the experiment described in the scientific paper “Visual long-term memory has a massive storage capacity for object details.” The experimenters showed some subjects 2500 images over the course of five and a half hours, and the subjects viewed each image for only three seconds. Then the subjects were tested in the following way described by the paper:

Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high  (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images.

In this experiment, the experimenters used pairs of similar images. A subject might be presented for 3 seconds with one of the two images in a pair, and then hours later be shown both images in the pair, and be asked which of the two he had seen.

Although the authors probably did not intend for their experiment to be any such thing, their experiment is a great experiment for disproving the prevailing dogma about memory storage in the brain. Let us imagine that memories were being stored in the brain by a process of synapse strengthening. Each time a memory was stored, it would involve the synthesis of new proteins (requiring minutes), and also additional time (presumably additional minutes) for an encoding effect in which knowledge or experience was translated into neural states. If the brain stored memories in such a way, it could not possibly keep up with remembering images that appeared for only three seconds each in a long series. It would be a state of affairs like that depicted in what many regard as the funniest scene in the “I Love Lucy” TV series, the scene in which Lucy and her friend Ethel were working on a confection assembly line. In that scene Lucy and Ethel were supposed to wrap chocolates that were moving along a conveyor belt. But while the chocolates moved slowly at first, the conveyor belt kept speeding up faster and faster, totally exceeding Lucy and Ethel's ability to wrap the chocolates (with ensuing hilarious results).

The experiment described above in effect creates a kind of fast moving conveyor belt in which images fly by at a speed so fast that it should totally defeat a person's ability to memorize accurately – if our memories were actually being created through the slow process imagined by scientists, in which each memory requires a protein synthesis requiring minutes, and an additional time (probably additional minutes) needed for encoding. But nonetheless the subjects did extraordinarily well in this test.
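The conveyor-belt mismatch can be put in numbers. The figures below use the experiment's 2500 images at 3 seconds of viewing each, together with an assumed storage time of 5 minutes per memory – my illustrative stand-in for "minutes of protein synthesis plus minutes of encoding," not a measured value:

```python
images = 2500
seconds_per_image = 3
assumed_storage_seconds = 5 * 60   # assumed minutes-long synthesis + encoding per memory

arrival_time = images * seconds_per_image            # total viewing time: 7,500 s
processing_time = images * assumed_storage_seconds   # total "wrapping" time: 750,000 s
print(processing_time / arrival_time)                # the belt runs 100x too fast
```

Under these assumptions the images arrive a hundred times faster than a synapse-strengthening brain could store them, yet the subjects scored near 90%.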

There is only one conclusion we can draw from such an experiment. It is that the bandwidth of human memory acquisition is vastly greater than anything that can be accounted for by neural theories of memory storage. We do not remember at the speed of synapse strengthening, which is a snail's speed similar to the speed of arm muscle strengthening. We instead are able to form new memories in a manner that is basically instantaneous. The authors of the scientific paper state that their results “pose a challenge to neural models of memory storage and retrieval.” That is an understatement, for we could say that their results are shockingly inconsistent with prevailing dogmas about how memories are stored.

There are some people who are able to acquire new memories at an astonishing rate. The autistic savant Kim Peek was able to recall everything he had read in the more than 7000 books he had read. Here we have a case in which memorization occurred at the speed of reading. Stephen Wiltshire is an autistic savant who has produced incredibly detailed and accurate artistic works depicting cities that he has seen only from a brief helicopter ride or boat ride. Of Wiltshire, savant expert Darold Treffert says, "His extraordinary memory is illustrated in a documentary film clip, when, after a 12-minute helicopter ride over London, he completes, in 3 hours, an impeccably accurate sketch that encompasses 4 square miles, 12 major landmarks and 200 other buildings all drawn to scale and perspective." Again, we have a case in which memories seem to be formed at an incredibly fast rate. Savant Daniel Tammet (who once publicly recited the value of pi accurately to 22,514 digits) was able to learn the Icelandic language in only 7 days. Derek Paravicini is a blind and brain-damaged autistic savant who has the incredible ability to replay any piece of music after hearing it only once. In 2007 the Guardian reported the following:

Derek is 27, blind, has severe learning difficulties, cannot dress or feed himself - but play him a song once, and he will not only memorize it instantly, but be able to reproduce it exactly on the piano. One part of his brain is wrecked; another has a capacity most of us can only dream of.

Other savants such as Leslie Lemke and Ellen Boudreaux have the same extraordinary ability to replay perfectly a song heard for the first time. 

Cases such as these are inconsistent with prevailing theories of memory. Are we to believe that such people (typically with substantial brain damage) can somehow synthesize proteins in their brains ten times or thirty times faster than the average human, so that their synapses can get bulked up ten times or thirty times faster? That's hardly credible. But if memories are not actually stored in brains, but stored in or added to a human psychic or spiritual facility, something like a soul, then there would be no reason why the brain-damaged might not have astonishing powers of memorization.

Some people can form memories 1000 times faster than should be possible under prevailing theories of brain memory storage, which postulate protein synthesis and encoding operations that should take minutes. This thousand-fold shortfall is only one of three thousand-fold shortfalls of the prevailing theory of brain memory storage. The two other shortfalls are: (1) humans can remember things for 50 years or more, which is 1000 times longer than the synaptic theory of memory storage can account for (synapses having average protein lifetimes of only a few weeks); (2) humans can recall things 1000 times faster than should be possible if you stored something in some exact location of the brain. If you stored a memory in your brain (an organ with no numbering system or coordinate system), it would be like throwing a needle onto a mountain-sized heap of needles, in the sense that finding that exact needle at some later point should take a very long time.
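The first of these shortfalls, the lifetime gap, is plain arithmetic. Using the roughly two-week average synapse-protein lifetime cited throughout this post against a memory retained for 50 years:

```python
weeks_per_year = 52
protein_lifetime_weeks = 2                    # approximate average synapse-protein lifetime
memory_lifetime_weeks = 50 * weeks_per_year   # a memory retained for 50 years
print(memory_lifetime_weeks / protein_lifetime_weeks)   # 1300.0 -- roughly a thousand-fold gap
```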

The imaginary conversation below illustrates some of the many ways in which prevailing dogma about brain memory storage fails. It's the kind of conversation that might occur if memories were formed according to the "brain storage of memory" dogmas that currently prevail among neuroscientists. 

Costello: Alright, guy, I'm now going to teach you an important geographical fact: which city is the capital city of Spain.
Abbott: Go ahead, I'm all ears.
Costello: Okay, here it is. The capital city of Spain is Madrid.
Abbott: Okay, I'll try to remember that.
Costello: So what is the capital city of Spain?
Abbott: I haven't formed the memory of that yet. It takes time. I'm still synthesizing the proteins I need to strengthen my synapses, so I can remember that.
Costello: So try hard. Remember, Madrid is the capital city of Spain.
Abbott: I'm working on forming the memory.
Costello: So do you remember by now what the capital city of Spain is?
Abbott: Don't ask me too soon. It takes minutes to synthesize those proteins.

After five additional minutes like this, the conversation continues.

Costello: Okay, so it's been five minutes since I first told you what the capital city of Spain is. You should have had enough time to have formed your memory of this fact.
Abbott: I'm sure by now I have formed that memory, because there has been enough time for protein synthesis in my synapses.
Costello: So what is the capital city of Spain?
Abbott: I can't recall.
Costello: But you formed the memory by now. Why can't you recall it?
Abbott: The problem is that I don't know exactly where in my brain the memory was stored. So I can't just instantly recall the memory. The memory is like a tiny needle in a haystack. There's no way I can find that quickly.
Costello: Can't you just search through all the memories in your brain, looking for this one?
Abbott: I could try, but it would take hours or days to search through all those memories.
Costello: Sheesh, this is driving me crazy. How about this? I can teach you that Madrid is the capital city of Spain, and when you form the memory, you can tell me the exact tiny spot where your memory was formed. So maybe you'll tell me, “Okay I stored that memory at brain neuron number 273,835,235.” Then I'll just say to you something like, “Please look in your brain at neuron number 273,835,235, and retrieve the memory you stored of what is the capital city of Spain.”
Abbott: That's a brilliant idea!
Costello: Thanks.
Abbott: On second thought, it will never work.
Costello: Why not?
Abbott: Neurons aren't numbered, and the brain has no coordinate system. It's like some vast city in which none of the streets are named, and none of the houses have house numbers. So if I put a memory in one little “house” in the huge brain city, I'll never be able to tell you the exact address of that house.
Costello: So how the hell am I supposed to teach you anything?
Abbott: Beats me. And if I ever learn anything new, I'm sure I won't remember it for more than a few weeks. That's because there's a big problem with those proteins that I will synthesize to store those new memories. They have average lifetimes of only a few weeks.

As long as they cling to “brain storage of memory” dogmas, our neuroscientists will never be able to overcome difficulties such as those mentioned in this conversation.

Thursday, December 20, 2018

The Lack of a Viable Theory of Neural Memory Encoding

If we are to believe in the claim that brains store human memories, we must have a credible account of four things: encoding, neural storage of very old memories, the instantaneous formation of memories, and the instantaneous retrieval of memories. The theory that human memories are stored in the brain fails in regard to each of these things.

There exists no plausible theory as to how a brain could store memories lasting for 50 years, but we know humans can remember many things for that long. The most popular idea of brain memory storage claims that memories are stored in synapses, but the proteins in synapses have an average lifetime of less than two weeks, meaning such a theory falls short by a factor of 1000 when it comes to explaining memories that persist for 50 years. As for memory retrieval, there is no theory explaining how humans could possibly recall instantly things they learned many years ago, and haven't thought about in years. You may hear the name of some obscure historical or cultural figure you learned about decades ago, and haven't heard about or thought about since that time. You may then instantly recall something about that person. But if that memory was stored somewhere in your brain, how could you instantly find the exact little location where that memory was? Doing that (for example, instantly finding a memory in storage spot 834,220 out of 1,200,000) would be like instantly finding a needle in a mountain-sized haystack. If a brain had an indexing system, or a coordinate system, or a neuron numbering system, there might be a faint hope for explaining instantaneous memory retrieval; but the brain has no such things. As for the instantaneous formation of memories, there is no theory that can account for it in a brain. The prevailing theory that memories are stored by synapse strengthening (which would involve protein synthesis requiring minutes) fails to account for memories that humans can form instantly.
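The retrieval difficulty can be made concrete in computing terms. In an addressed store (a Python dict below), finding slot 834,220 of 1,200,000 is a single direct step; in an unaddressed store, the only option is to scan slot by slot, which is the situation a brain with no indexing, coordinates, or neuron numbering would face. The slot counts here are just the illustrative figures from the paragraph above:

```python
N = 1_200_000

# Addressed store: retrieval is one direct lookup.
indexed = {n: f"memory #{n}" for n in range(N)}
print(indexed[834_220])

# Unaddressed store: retrieval must examine slots one by one.
unindexed = [f"memory #{n}" for n in range(N)]
steps = 0
for item in unindexed:
    steps += 1
    if item == "memory #834220":
        break
print(steps)   # 834,221 examinations to find a single memory
```

The dict finds the item in one step because every item has an address; the list scan needs 834,221 comparisons because nothing does.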

When we consider the issue of memory encoding, we find a difficulty as great as the difficulties just discussed. Encoding is supposedly some translation that occurs so that a memory can be physically stored in a brain, so that it might last for years. The problem is that human memories include incredibly diverse types of things, and we have no idea how most of these things could be stored as neural states. Consider only a few of the types of things that can be stored in a human memory:

  • Memories of daily experiences, such as what you were doing on some day
  • Facts you learned in school, such as the fact that Lincoln was shot at Ford's Theater
  • Sequences of numbers such as your social security number
  • Sequences of words, such as the dialog an actor has to recite in a play
  • Sequences of musical notes, such as the notes an opera singer has to sing
  • Abstract concepts that you have learned
  • Memories of particular non-visual sensations such as sounds, food tastes, smells, pain, and physical pleasure
  • Memories of how to do physical things, such as how to ride a bicycle
  • Memories of how you felt at emotional moments of your life
  • Rules and principles, such as “look both ways before crossing the street”
  • Memories of visual information, such as what a particular person's face looks like

How could all of these very different types of information ever be translated into neural states so that a brain could store them?

Our neuroscientists have told us again and again that the brain does such an encoding, but there is no real evidence that any such thing takes place. What we have evidence for is merely evidence that humans remember things. If you are someone who believes that memories are physically stored in brains, then you may claim that memory encoding occurred at such and such a rate whenever you observe people learning something at such and such a rate. But merely observing evidence of learning or memory is not acquiring any actual evidence that encoding has occurred. There remains the possibility that our memories are not stored as neural states, the possibility that our repository of memory is some spiritual or psychic facility that is non-neural and non-biological.

Such a possibility should not seem remote when we consider that there is no workable theory as to how learned knowledge and experiences could be encoded so that they might be stored in a brain. No matter what theory we may create to account for the encoding of learned knowledge and episodic experience so that they can be stored in a brain, such a theory will always end up sounding ridiculous after we examine the theory in detail and consider its requirements and shortcomings. Let's look at some possibilities, and why they fail.

Theory #1: Direct writing of words and images

First, let's consider the simplest theory of encoding we can imagine – that a memory is stored in the brain so that it appears in a neural form pretty much as we see it in our minds. Under this theory, when you memorized some series of words, this would cause a sequence of microscopic little letters to become stored in your brain; and when you experienced some visual experience, this would get stored as some tiny little image in your brain. So, for example, under this theory, if someone memorized the sentence, “There may be green aliens in the center of the galaxy,” then after the person died, some scientist might examine that person's brain with an electron microscope, and actually find some tiny little words in some neurons, words that directly spelled out, “There may be green aliens in the center of the galaxy.” And under this theory, if someone was given a picture of a toy purple pony, and asked to memorize it, then after the person died, a scientist might be able to examine the person's brain under an electron microscope, and the scientist might say, “Aha, I see in his neurons a tiny little image of a toy purple pony.”

This theory may immediately provoke giggles, and it is rather easy to think of some reasons why it does not work. They are these:

  1. If memory worked in such a way, we would surely have already discovered such easily-recognizable memory traces. But no such things have been seen, even though a great deal of human neural tissue has been examined at very high magnification. When we look at brain tissue at the highest magnification, we see no tiny little letters or tiny little images of animals, cars, and persons.
  2. For a brain to be able to write words that we memorized in this type of direct manner, it would seem that the brain would need some very precise write mechanism, capable of forming the exact characters of the alphabet in brain tissue; but no such brain capability is known to exist.
  3. It seems that if such a theory were true, recalling some words would be like reading. But recalling words is almost never like reading, and we don't see in our mind's eye some stream of letters as we recall some words we memorized.
  4. For a brain to be able to read words that we memorized in this type of direct manner, it would seem that the brain would need some very precise reading mechanism, capable of reading the exact characters of the alphabet stored in very tiny letters written in brain tissue; but no such thing is known to exist. We don't have tiny little “micro-eyes” in our brains that might allow us to read tiny microscopic letters stored in our brains.

Theory #2: Brain storage of words and images using some unknown non-binary coding or translation protocol

Now, let's consider a different theory of memory encoding – the idea that instead of directly storing words and images (so that we could directly read the words and directly see the images), the brain uses some type of unknown coding or translation protocol. For example, it could conceivably be that words that we learn are somehow translated into proteins or chemicals or electrical states, using some as-yet-undiscovered translation scheme.

For example, such a scheme might work a little like this:

Item | How the item might be represented
Letter “A” | Some particular neural arrangement of atoms, chemicals or electricity
Letter “B” | Some other neural arrangement of atoms, chemicals or electricity
Letter “C” | Some other neural arrangement of atoms, chemicals or electricity

Such a scheme might work a little like the Morse code, in which particular letters are translated into some sequence of dots, dashes, or dots and dashes. Some particular arrangement of atoms, chemicals or electricity might work like a dot in the Morse code, and some other particular arrangement of atoms, chemicals or electricity might work like a dash in the Morse code.
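A letter-level translation scheme of this kind is easy to sketch in code. The sketch below is purely a hypothetical illustration of what such a scheme would amount to; nobody claims any such table exists in the brain. The dot/dash values are real International Morse code.

```python
# Hypothetical illustration only: nobody claims such a table exists in the brain.
# The dot/dash values below are real International Morse code.
MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "O": "---", "S": "..."}

def encode(word):
    """Translate each letter into its dot/dash building blocks."""
    return " ".join(MORSE[ch] for ch in word.upper())
```

For example, `encode("SOS")` yields `"... --- ..."`. Note that even this trivial scheme requires a fixed, agreed-upon table plus a mechanism that reads and writes it; the theory under discussion requires the neural equivalent of both.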

Or there could be some higher-level translation system based on particular words rather than letters. For example, we can imagine something like this:

Item | How the item might be represented
Word “sun” | Some particular neural arrangement of atoms, chemicals, proteins or electricity
Word “man” | Some other neural arrangement of atoms, chemicals, proteins or electricity
Word “move” | Some other neural arrangement of atoms, chemicals, proteins or electricity

There is one giant problem with such a theory. All of the languages that we use are fairly recent innovations, having been created in only the last few percent of the time that humans have existed. For example, back in the Roman Empire people used Latin, but the English we use today has been in use for less than 1200 years. The alphabet used for English is less than 1000 years old, and its alphabetic predecessor (the Latin alphabet) is only a few thousand years old. It is generally acknowledged even by Darwinism enthusiasts that very complex evolutionary innovations cannot arise in only a few centuries of time or a few thousand years. So we could never explain how the brain could naturally possess some elaborate translation system based on such a relatively recent innovation as the English language and the English alphabet.

Scientists strain our credulity whenever they talk about novel functional genes accidentally appearing even over the course of a million years. Think, then, on how much greater a problem there would be in explaining how hundreds of novel functional genes could have appeared in less than 3000 years, to perform some translation operation involving characters and words that have existed for less than 3000 years. To assume such a thing would be to assume evolution working thousands of times faster than the rate we would predict from known mutation rates.

There is also no evidence that any such great burst of genetic novelty has occurred. Although the half-life of DNA is about 521 years, we have enough samples of human DNA from ancient Rome and ancient Egypt to know that there has been no big change in the DNA of humans during the past 3000 years. So it seems impossible that there could be any genetic capability (arising in the past few thousand years) that would allow humans to neurally store information using some encoding mechanism specifically tailored to the letters and words of the English language that have existed for less than 3000 years.

Another difficulty with the theory of encoding just mentioned is that if it existed, we would see big differences in the genes of people who spoke different languages. According to such a theory, we would expect that Chinese people would have one group of genes corresponding to proteins or RNA molecules needed to translate Chinese words into neural states, and that English-speaking people would have some other quite different set of genes corresponding to proteins or RNA molecules needed to translate English words into neural states (particularly since the Chinese language and writing system are so different from the English language and alphabet). But there exists no such difference in the genes of Chinese-speaking people and English-speaking people.

There is also the difficulty, discussed more fully in the conclusion of this post, that there is no sign in the human genome that any such genes exist for performing such an elaborate operation of encoding human learned knowledge and episodic experience so that it can be stored in neurons or synapses (and there would need to be many hundreds or thousands of genes dedicated to performing such a task if it was done).

Theory #3: Binary writing of words and images

Now, let's consider the theory that the words we memorize and the images we remember are stored in binary format. We know that computers store information in binary format, so when it is suggested that the brain may use a similar format, this may sound reasonable to the average person (although it isn't, a brain being radically different from an electronic computer).

This possibility actually has all of the difficulties of the previous possibility. What goes on when your computer stores words in binary format is the following:

  1. First, individual letters in the words are converted into decimal numbers (such as 65, 66, and 67) using a particular translation table called the ASCII code.
  2. Then, those numbers are converted from decimal to binary using a decimal-to-binary conversion routine.

So if we are to believe that the brain does binary encoding like a computer, we would need to believe that built into the brain on a low level is some type of translation scheme like the one below, a scheme in which letters are translated into decimal numbers.

ASCII table used by a computer to store encoded information

In addition, we would also have to believe that the brain has some kind of capability to translate the numbers in such a system into binary numbers. Alternately, we could believe that the brain has a scheme for directly translating characters into binary, but the overall complexity of such a translation mechanism would be every bit as great as that of a system in which characters are converted into decimal, and then into binary.
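The two-step pipeline a computer uses can be written out in a few lines, which makes clear just how much machinery the theory attributes to the brain. This is a minimal sketch of the computer's procedure, not anyone's claim about neurons:

```python
def to_binary(text):
    """Mimic computer text storage: character -> ASCII number -> 8-bit binary."""
    return [format(ord(ch), "08b") for ch in text]
```

For example, `to_binary("A")` yields `["01000001"]`, since "A" has ASCII code 65. The theory requires the neural equivalent of both steps of this pipeline, the character-to-number table and the number-to-binary conversion, and nothing resembling either has been identified in a brain.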

We have the following difficulties involved with such an idea:

  1. If memory worked in such a way, we would surely have already discovered such easily-recognizable memory traces. We would have discovered tiny little traces in the brain that resemble binary coding. But no such things have been seen, even though a great deal of neural tissue has been examined at very high magnification.
  2. For a brain to be able to write words that were memorized in this type of direct manner, it would seem that the brain would need some very precise write mechanism, capable of writing binary traces; but no such thing is known to exist.
  3. For a brain to be able to read words that we memorized in this type of direct manner, it would seem that the brain would need some very precise reading mechanism, capable of reading in binary; but no such thing is known to exist.
  4. Since the alphabets of human languages are only a few thousand years old, there would have been no time for the human body to have evolved some complex biological mechanism capable of converting specific alphabetic characters to binary.
  5. We can imagine no way in which a brain could achieve the translation effect in which words are translated into binary. As far as we know, there is nothing like an ASCII table in your brain, nor is there anything like a facility for translating English letters directly into binary, nor is there anything like a facility for translating English letters into decimal, and then translating decimal numbers into binary. There are no genes in the genome that perform such tasks.

Theory #4: Storage of sensations occurring when something is learned or experienced

Now, let's consider a whole different theory. It could be that instead of storing words that you memorized, a brain might store the sensations that occurred when you memorized something. So imagine I open up a copy of Vogue magazine, and read an ad saying, “You'll feel fresher than the morning dew.” If I memorize that slogan, it might be that my brain is storing the electrical or chemical activity in my brain when I saw that slogan. And if I hear on the radio some advertising slogan of “We'll build your wealth sky-high,” and memorize that, it could be that my brain is storing something corresponding to the electrical and chemical activity in my brain when I heard such a slogan.

One reason for doubting this theory of memory encoding is that perceptions involve large parts of the brain, and it is hard to imagine that sensations involving large fractions of the brain could be stored in a tiny part of the brain. Since our minds store many thousands or millions of visual memories, if we are to believe that memories are stored in brains, we would have to believe that each stored memory uses only a tiny portion of the brain. But my current visual sensations require the involvement of a large fraction of the occipital lobes of the brain – probably many cubic centimeters. We cannot plausibly imagine that the brain simply dumps the chemical or electrical contents of those cubic centimeters into some memory taking up only a tiny space in my brain, a millionth or less. It would seem, therefore, that if the brain simply dumped your visual and auditory sensations into memory, it would require a space vastly bigger than itself to store all the memories we have of things we have seen and heard.

Another difficulty in the “memories are sensation dumps” idea is that there is no evidence of any mechanism for copying information from one part of the brain to another. Let's imagine that when a memory forms, the state of your occipital lobes (involved in vision) is copied to some point on the cortex where the memory is stored. That would require some biological functionality for copying the state of one large part of the brain to another part of the brain; but we know of no such functionality.

It is easy to think of another big reason for doubting such a theory. It is that if our brains were to be storing something corresponding to sensations, we would expect that when we remembered words, we would remember something visual, with a characteristic font or color, or something auditory, with a characteristic sound. But while that sometimes happens, in almost all cases it does not work that way.

For example, if I remember the words, “Here's looking at you, kid,” I do remember a very specific audio sound, the distinctive sound of Humphrey Bogart saying that in the movie Casablanca. And if I remember the phrase “Men walk on moon,” I do remember a specific font, the font used in the famous New York Times headline of the Apollo 11 lunar mission. But a very large fraction of my memories do not have specific visual or audio characteristics. For example, if someone asks, “What is the lightest particle in an atom?” I may reply, “The electron is the lightest particle in the atom.” But that memory I have recalled does not have any specific sound or sight associated with it. I don't hear the answer in someone's voice, and I do not see the answer as words in any particular font or color. And if someone asks me, “What is your birthday?” I will remember a particular day. But I will not see in my mind's eye some date written in a particular font and having some particular color, nor will I hear in my mind's ear some particular type of voice stating the answer.

For almost all of my knowledge memories, the same thing is true: when I recall the memory, I don't see something that appears with some particular visual appearance, nor do I hear something that has some particular sound. It seems this would not be the case if the brain was storing my memories by storing visual and auditory sensations.

It is also true that I can memorize something that does not correspond to any particular visual or auditory sensation I had. For example, I can visualize some imaginary thing such as a giant purple elephant. If I think about this imaginary thing enough times, it will become a permanent memory. But this imaginary thing I have memorized does not correspond to any sensation I had. In this case I never saw a giant purple elephant. So it cannot be that my memory of the giant purple elephant was formed from sensations that I had of such a thing. Similarly, a fiction writer can dream up on Tuesday an idea for a short story, and then write that short story on Wednesday, using the memory he formed on Tuesday. But the memory will not correspond to any visual or auditory sensations he had.

It seems that we therefore cannot explain memories as merely being a storage of sensations that we had at some time when we learned something. You learned many thousands of things in school, but when you remember such knowledge, you virtually never remember the sight and sound of your school teacher teaching you such things or the sight of you reading a book telling you such things (as you would if your memory of learned knowledge was just a dump of the sensations you had when you learned such things).


I have reviewed some of the theories that could be used to account for the encoding of learned information as neural states. There are strong reasons for rejecting each such theory. It seems it is impossible to present a specific theory of memory encoding that stands up to scrutiny as a reasonable possibility. 

How is it that neuroscientists sidestep this difficulty? They simply avoid presenting specific theories of how a brain could translate learned information into neural states. An example is the article on “Memory (encoding).” The article tells us, “Encoding allows the perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from short-term or long-term memory.”
But the article fails to discuss any specific theory of how such a conversion would work. The article has all kinds of digressions and tangential information, but nowhere does it advance a single specific idea of how learned information (such as a learned sequence of words) could be translated into neural states when a memory was stored in a brain. An article on a memory experiment states, "Press a scientist to tell you how memories are encoded and decoded in the brain, and you’ll soon find that the scientific community doesn’t have an answer." In his book Crimes of Reason: On Mind, Nature, and the Paranormal, philosopher Stephen Braude says on page 19 that neuroscience "never addresses the fundamental issues of how any physical modification can represent or stand for what is remembered."

A speculative neuroscience paper confesses that "codifying memories is one of the fundamental problems of modern Neuroscience," but that "the functional mechanisms behind this phenomenon remain largely unknown."  It would have been more accurate to have stated "entirely unknown." 

Just as it is impossible to advance a credible detailed theory as to how Santa Claus could distribute a toy to every good little boy and girl in the world in a single night, it is impossible to advance a credible detailed theory of how learned knowledge and episodic experience could be encoded and permanently stored as neural states. If memories were to be encoded so that they could be stored in brains, there would be two major “footprints” of such a thing, physical traces showing that it was going on. They are the following:

High repetition of representational building blocks. Whenever encoded information is stored, there is a repetition of two or more things we can call representational building blocks or representational atoms. In binary encoding these representational building blocks are the 1's and 0's or their electromagnetic equivalent. In alphabetic encoding, the representational building blocks are the letters of the alphabet. In DNA the representational building blocks are the four types of nucleotide base pairs that are repeated over and over again. In Morse code, the representational building blocks are dots and dashes. Even if you don't know the system of encoding that was used, it is easy to detect that encoded information is present, by seeing a high repetition of representational building blocks. If brains stored memories encoded into neural states, we would see a gigantic degree of repetition of some type of representational building blocks.
Genes dedicated to memory encoding. If human brains were to actually be translating thoughts and sensory experiences so that they can be stored as memory traces, such a gigantic job would require a huge number of genes – many times more than the 500 or so genes that are used for the very simple encoding job of translating DNA nucleotide base pairs into amino acids.

There is no sign at all of either of these things in the brain. We absolutely do not see anything like very highly repeated representational building blocks in the brain that might be the footprints of encoded memory information. And we see no sign of any such memory encoding genes in the human genome.
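The first footprint, repetition of building blocks, is detectable with statistics this simple. The sketch below is a hypothetical illustration: it profiles any sequence by counting its distinct units and how often each is repeated on average. Genuinely encoded information, such as DNA, shows a tiny alphabet repeated enormously often.

```python
def building_block_profile(sequence):
    """Return (alphabet_size, average_repetitions_per_symbol).

    Encoded information is betrayed by a small alphabet of building
    blocks, each repeated a huge number of times.
    """
    alphabet = set(sequence)
    return len(alphabet), len(sequence) / len(alphabet)
```

A DNA-like string of 1000 bases profiles as 4 building blocks repeated 250 times each on average; a detector of this kind would flag encoded information even if the coding scheme itself were completely unknown.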

There is a study that claims to have found possible evidence of memory encoding genes, but its methodology is ridiculous, and involved the absurd procedure of looking for weak correlations between a set of data extracted from one group of people and another set of data retrieved from an entirely different group of people. See the end of this post for reasons we can't take the study as good evidence of anything. There is not one single gene that a scientist can point to and say, “I am sure this gene is involved in memory encoding, and I can explain exactly how it works to help translate human knowledge or experience into engrams or memory traces.” But if human memories were actually stored in brains, there would have to be thousands of such genes.


There is an additional general difficulty involved in the idea that brains encode our episodic experiences and learned knowledge as neural states. It is that if the brain did such a thing, it would require a translation facility so marvelous that it would be a “miracle of design,” something that we would never expect to ever appear by unguided evolution.

Yet another problem is that if brains encoded learned knowledge and episodic experiences into engrams or memory traces,  then forming new memories would be slow, and retrieving memories would be slow -- for whenever we formed a new memory, all kinds of translation work and encoding would have to be done (which would take a while), and whenever we retrieved a stored memory, all kinds of decoding work would have to be done (which would take a while).  But instead humans can form memories instantly and retrieve memories instantly, much faster than things would work if all this encoding and decoding work had to be done. 

What has been discussed here is only a small fraction of the very large case for thinking that human cognition and memory must be some psychic or spiritual reality rather than a biological or neural reality.  

Let's imagine a boy who thinks that his mother will give him a pony for Christmas. December 25th comes, and the boy searches everywhere around his house and yard; but there's no pony. On December 28th, having spent several more days looking for such a pony without success, the boy says, "There must be a pony around here somewhere." We may compare this boy to the modern neuroscientist who believes there are brain engrams that encode our learned knowledge, but who still hasn't found such things, despite decades of searching. He says to himself, "They must be somewhere in the brain." But if they existed, they would have been found long ago. There is microscopic encoded information in DNA (specifying the amino acids in proteins). Unmistakable evidence of that was discovered in the early 1950s. Why would it possibly be that we would have failed to discover encoded memory information in a brain by the year 2018, if such information actually existed in brains? It would be as easy to find as the genetic information in DNA.

Thursday, November 22, 2018

Why Most Animal Memory Experiments Tell Us Nothing About Human Memory

Recently the BBC reported a science experiment with the headline “'Memory transplant' achieved in snails.” This was all over the science news on May 14. Scientific American reported it with a headline stating “Memory transferred between snails,” and other sites such as the New York Times site made similar matter-of-fact announcements of a discovery. But you need not think very hard to realize that there's something very fishy about such a story. How could someone possibly get decent evidence about a memory in a snail?

To explain why this story and similar stories do not tell us anything reliable about memory, we should consider the issue of small sample sizes in neuroscience studies. The issue was discussed in a paper in the journal Nature Reviews Neuroscience, one entitled Power failure: why small sample size undermines the reliability of neuroscience. The article tells us that neuroscience studies tend to be unreliable because they are using too small a sample size. When there is too small a sample size, there's a too high chance that the effect reported by a study is just a false alarm. An article on this important paper states the following:

The group discovered that neuroscience as a field is tremendously underpowered, meaning that most experiments are too small to be likely to find the subtle effects being looked for and the effects that are found are far more likely to be false positives than previously thought. It is likely that many theories that were previously thought to be robust might be far weaker than previously imagined.

I can give a simple example illustrating the problem. Imagine you try to test extrasensory perception (ESP) using a few trials with your friends. You ask them to guess whether you are thinking of a man or a woman. Suppose you try only 10 trials with each friend, and the best result is that one friend guessed correctly 70% of the time. This would be very unconvincing as evidence of anything. There's about a 17 percent chance of getting at least 7 out of 10 such guesses right purely by chance; and if you test five people, the chance that at least one of them does that well purely by chance is better than 60 percent. So having one friend get 7 out of 10 guesses correct is no real evidence of anything. But if you used a much larger sample size it would be a different situation. For example, if you tried 1000 trials with a friend, and your friend guessed correctly 700 times, that would have a probability of far less than 1 in a million. That would be much better evidence.
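The chance levels in a guessing game like this follow directly from the binomial distribution, and can be checked with a few lines of standard-library code:

```python
from math import comb

def tail_prob(n, k, p=0.5):
    """Probability of at least k successes in n trials, success chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance one friend gets at least 7 of 10 fair guesses right:
p_one = tail_prob(10, 7)                 # 0.171875, about 17%
# Chance at least one of five friends does that well:
p_any_of_five = 1 - (1 - p_one) ** 5     # about 0.61
```

By contrast, `tail_prob(1000, 700)` is vanishingly small, far below one in a million, which is why the large-sample result would be real evidence while the small-sample result is not.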

Now, the problem with many a neuroscience study is that very small sample sizes are being used. Such studies fail to provide convincing evidence for anything. The snail memory test is an example.

The study involved giving shocks to some snails, extracting RNA from their tiny brains, and then injecting that into other snails that had not been shocked. It was reported that such snails had a higher chance of withdrawing into their shells, as if they were afraid and remembered being shocked when they had not. But it might have been that such snails were merely acting randomly, not experiencing any fear memory transferred from the first set of snails. How can you have confidence that mere chance was not involved? You would have to do many trials or use a sample size that guarantees that sufficient trials will occur. This paper states that in order to have moderate confidence in results, getting what is called a statistical power of .8,  there should be at least 15 animals in each group. This statistical power of .8 is a standard for doing good science. 
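The link between sample size and statistical power is easy to demonstrate by simulation. The sketch below is a hypothetical illustration (the freezing rates of 30% and 70% are invented for the example, not taken from any paper): it simulates many two-group experiments and counts how often a simple two-proportion z-test reaches significance at the 0.05 level.

```python
import math
import random

def simulate_power(n_per_group, p_control=0.3, p_treated=0.7, n_sims=5000, seed=0):
    """Estimate statistical power by Monte Carlo simulation.

    Each simulated animal either responds or not; we count how often a
    two-proportion z-test finds the group difference significant.
    """
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_sims):
        a = sum(rng.random() < p_control for _ in range(n_per_group))
        b = sum(rng.random() < p_treated for _ in range(n_per_group))
        pooled = (a + b) / (2 * n_per_group)
        if pooled in (0.0, 1.0):
            continue  # no variance in the data; cannot reach significance
        se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
        z = abs(a - b) / n_per_group / se
        if z > 1.96:  # two-sided test at the 0.05 level
            significant += 1
    return significant / n_sims
```

Under these assumed effect sizes, groups of 7 animals detect the difference far less often than groups of 15, which is why tiny studies so easily yield both missed effects and false alarms.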

But judging from the snail paper, the scientists did not do a large number of trials. Judging from the paper, the effect described involved only 7 snails (the number listed on lines 571–572 of the paper). There is no mention of trying the test more than once on such snails. Such a result is completely unimpressive, and could easily have been achieved by pure chance, without any real “memory transfer” going on. Whether the snail does or does not withdraw into its shell is like a coin flip. It could easily be that by pure chance you might see some number of “into the shell withdrawals” that you interpret as “memory transfer.”

Whether a snail is withdrawing into its shell requires a subjective judgment, where scientists eager to see one result might let their bias influence their judgments about whether the snail withdrew into its shell or not. Also, a snail might withdraw into its shell simply because it has been injected with something, not because it is remembering something. Given such factors and the large chance of a false alarm when dealing with such a small sample size, this “snail memory transfer” experiment offers no compelling evidence for anything like memory transfer. We may also note the idea that RNA is storing long-term memories in animals is entirely implausible, because of RNA's very short lifetime. According to this source, RNA molecules typically last only about two minutes, with 10 to 20 percent lasting between 5 and 10 minutes. And according to this source, if you were to inject RNA into a bloodstream, the RNA molecules would be too large to pass through cell membranes.

The Tonegawa memory research lab at MIT periodically puts out sensational-sounding press releases on its animal experiments with memory. Among the headlines on its site are the following:
  • “Neuroscientists identify two neuron populations that encode happy or fearful memories.”
  • “Scientists identify neurons devoted to social memory.”
  • “Lost memories can be found.”
  • “Researchers find 'lost' memories”
  • “Neuroscientists reverse memories' emotional associations.”
  • “How we recall the past.”
  • “Neuroscientists identify brain circuit necessary for memory formation.”
  • “Neuroscientists plant false memories in the brain.”
  • “Researchers show that memories reside in specific brain cells.”
But when we take a close look at the issue of sample size and statistical power, and the actual experiments that underlie these claims, it seems that few or none of these claims are based on solid, convincing experimental evidence. Although the experiments underlying these claims are very fancy and high-tech, the experimental results seem to involve tiny sample sizes so small that very little of it qualifies as convincing evidence.

A typical experiment goes like this: (1) Some rodents are given electrical shocks; (2) the scientists try to figure out where in the rodent's brain the memory was; (3) the scientists then use an optogenetic switch to “light up” neurons in a similar part of another rodent's brain, one that was not fear trained; (4) a judgment is made on whether the rodent froze when such a thing was done.

Such experiments have the same problems I mentioned above with the snail experiment: the problem of subjective interpretations and alternate explanations. The MIT memory experiments typically involve a judgment of whether a mouse froze. But that may often be a hard judgment to make, particularly in borderline cases. Also, we have no way of telling whether a mouse is freezing because he is remembering something. It could be that the optogenetic zap that the mouse gets is itself sufficient to cause the mouse to freeze, regardless of whether it remembers something. If you're walking along, and someone shoots light or energy into your brain, you might stop merely because of the novel stimulus. A science paper says that it is possible to induce freezing in rodents by stimulating a wide variety of regions. It says, "It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014)."

But the main problem with such MIT memory experiments is that they involve very small sample sizes, so small that all of the results could easily have happened purely because of chance. Let's look at some sample sizes, remembering that according to this scientific paper, there should be at least 15 animals in each group to have moderate confidence in your results, sufficient to reach the standard of a “statistical power of .8.”

Let's start with their paper, “Memory retrieval by activating engram cells in mouse models of early Alzheimer’s disease,” which can be accessed from the link above after clicking underneath "Lost memories can be found." The paper states that “No statistical methods were used to predetermine sample size.” That means the authors did not do what they were supposed to have done to make sure their sample size was large enough. When we look at page 8 of the paper, we find that the sample sizes used were merely 8 mice in one group and 9 mice in another group. On page 2 we hear about a group with only 4 mice, and on page 4 about another group with only 4 mice. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. The study therefore provides no convincing evidence of engram cells.

Another example is this paper by the MIT memory lab, with the grandiose title “Creating a False Memory in the Hippocampus.” When we look at Figure 2 and Figure 3, we see that the sample sizes used were paltry: the different groups of mice had only about 8 or 9 mice per group. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. No convincing evidence has been provided of creating a false memory.

A third example is this paper with the grandiose title “Optogenetic stimulation of a hippocampal engram activates fear memory recall.” Figure 2 tells us that in one of the groups of mice there were only 5 mice, and that in another group there were only 3 mice. Figure 3 tells us that in two other groups of mice there were only 12 mice. Figure 4 tells us that in some other group there were only 5 mice. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. No convincing evidence has been provided of artificially activating a fear memory by the use of optogenetics.

Another example is this paper entitled “Silent memory engrams as the basis for retrograde amnesia.” Figure 1 tells us that the number of mice in particular groups used for the study ranged between 4 and 12. Figures 2 and 3 tell us that the number of mice in particular groups ranged between 3 and 12. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. Another unsound paper is the 2015 paper "Engram Cells Retain Memory Under Retrograde Amnesia," co-authored by Tonegawa. When we look at the end of the supplemental material, at Figure S13, we find that the experimenters were using only 8 mice in one study group and 7 in another study group. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms.

We see the same "low statistical power" problem in this paper claiming an important experimental result regarding memory. The paper states in its Figure 2 that only 6 mice were used for the study group, and 6 mice for the control group. The same problem is shown in Figure 3 and Figure 4 of the paper. We see the same "low statistical power" problem in this paper entitled "Selective Erasure of a Fear Memory." The paper states in its Figure 3 that only 6 to 9 mice were used per study group. That's only about half of the "15 animals per study group" needed for a modestly reliable result. The same defect is found in this memory research paper and in this memory research paper.

The term “engram” refers to a cell or cells that store a memory. Decades after the term was created, we still have no convincing evidence for the existence of engram cells. But memory researchers are shameless in using the term “engram” matter-of-factly even though no convincing evidence of an engram has been produced. So, for example, one of the MIT Lab papers may again and again refer to some cells they are studying as “engram cells,” as if they could convince us that such cells actually are engram cells by telling us again and again that they are. Doing this is rather like some ghost researcher matter-of-factly using the term “ghost blob” to refer to particular patches of infrared light he is studying after using an infrared camera. Just as a blob of infrared light tells us only that some patch of air was slightly colder (not that such a blob is a ghost), a scientist observing a mouse freezing is merely entitled to say he saw a mouse freezing (not that the mouse is recalling a fear memory); and a scientist seeing a snail withdrawing into its shell is merely entitled to tell us that he saw a snail withdrawing into its shell (not that the snail was recalling some fear memory).

The relation between the chance of a false alarm and the statistical power of a study is clarified in this paper by R. M. Christley. The paper has an illuminating graph which I present below with some new captions that are a little clearer than the original captions. We see from this graph that if a study has a statistical power of only about .2, then the chance of the study giving a false alarm is something like 1 in 3 if there is a 50% chance of the effect existing, and much higher (such as 50% or greater) if there is less than a 50% chance of the effect existing. But if a study has a statistical power of about .8, then the chance of the study giving a false alarm is only about 1 in 20 if there is a 50% chance of the effect existing, and much higher if there is less than a 50% chance of the effect existing. Animal studies using much fewer than 15 animals per study group (such as those I have discussed) will result in the relatively high chance of false alarms shown in the green line.

[Graph after Christley: probability that a positive result is a false alarm, plotted against statistical power, for various prior probabilities that the effect is real]
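The same relationship can be computed directly. Below is a minimal sketch of the standard calculation of the fraction of "significant" results that are false alarms; the exact numbers depend on the significance threshold assumed (I assume the conventional α = 0.05 here, so the figures differ somewhat from the graph's, which reflect Christley's own assumptions), but the qualitative pattern is the same: low power and a low prior chance of the effect drive the false-alarm fraction up steeply.

```python
def false_alarm_risk(power, prior, alpha=0.05):
    """Fraction of statistically significant results that are
    false alarms, given the study's power, the prior probability
    that the effect is real, and the significance level alpha."""
    true_hits = power * prior    # real effects correctly detected
    false_hits = alpha * (1 - prior)  # nonexistent effects "detected"
    return false_hits / (true_hits + false_hits)

print(round(false_alarm_risk(0.2, 0.5), 3))  # low-powered study: 0.2
print(round(false_alarm_risk(0.8, 0.5), 3))  # well-powered study: 0.059
print(round(false_alarm_risk(0.2, 0.1), 3))  # low power, low prior: 0.692
```

At 20% power and only a 10% prior chance of the effect being real, roughly two-thirds of the "positive" results are false alarms.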

The PLOS paper here analyzed 410 experiments involving fear conditioning with rodents, a large fraction of them memory experiments. The paper found that such experiments had a “mean normalized effect size” of only .29. An effect size of only .29 is very weak, and experiments built around such a small effect (with the small samples typically used) have a high chance of producing false alarms. Effect size is discussed in detail here, where we learn that with an effect size of only .3, there's typically something like a 40 percent chance of a false alarm.

To determine whether a sample size is large enough, a scientific paper is supposed to do something called a sample size calculation. The PLOS paper here reported that only one of the 410 experiments it studied had such a calculation. The PLOS paper reported that in order to achieve a moderately convincing statistical power of .80, an experiment typically needs to have 15 animals per group; but only 12% of the experiments had that many animals per group. Referring to statistical power (the probability that a study will detect an effect that really exists), the PLOS paper states, “no correlation was observed between textual descriptions of results and power.” In plain English, that means that there's a whole lot of BS flying around when scientists describe their memory experiments, and that countless cases of very weak evidence have been described by scientists as if they were strong evidence.
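A sample size calculation of this kind can be sketched in a few lines. The version below uses the standard normal-approximation formula for a two-group, two-sided comparison; the function name and the α = 0.05 threshold are my assumptions. Note that under this crude approximation, roughly 15 or 16 animals per group suffices only for a large effect (d around 1.0):

```python
import math
from statistics import NormalDist

def n_per_group(d, power=0.8, alpha=0.05):
    """Approximate animals needed per group to detect effect size d
    in a two-sided, two-sample comparison (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(1.0))   # large effect: 16 per group
print(n_per_group(0.51))  # median neuroscience effect: 61 per group
print(n_per_group(0.29))  # the PLOS paper's mean effect size: 187 per group
```

For the small effect sizes actually reported in rodent fear-conditioning work, the required group sizes dwarf the 4 to 12 animals these studies typically use.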

Our science media shows very little sign of paying any attention to the statistical power of neuroscience research, partially because rigor is unprofitable. A site can make more money by trumpeting borderline, weakly suggestive research as if it were a demonstration of truth, because the more users click on a sensational-sounding headline, the more money the site makes from ads. Our neuroscientists show little sign of paying much attention to whether their studies have decent statistical power. For the neuroscientist, it's all about publishing as many papers as possible, so it's a better career move to do 5 underpowered small-sample studies (each with a high chance of a false alarm) than a single study with an adequate sample size and high statistical power.

In this post I used an assumption (which I got from one estimate) that 15 research animals per study group are needed for a moderately persuasive result. It seems that this assumption may have been too generous. In her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky PhD calculates (using Ioannidis’s data) that the median effect size of neuroscience studies is about .51. She then states the following, talking about statistical power:

To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group.

If we describe a power of .5 as being moderately convincing, it therefore seems that 31 animals per study group are needed for a neuroscience study to be moderately convincing. But most experimental neuroscience studies involving rodents and memory use fewer than 15 animals per study group.
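Zalocusky's figures can be roughly checked with the normal approximation to two-sample power (a sketch, not her exact calculation: her numbers come from the exact t distribution, so the small-n value below is slightly more optimistic than hers; the effect size 0.51 and α = 0.05 are taken as given):

```python
from statistics import NormalDist

def approx_power(n, d=0.51, alpha=0.05):
    # Normal approximation to two-sided, two-sample t-test power
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - d * (n / 2) ** 0.5)

for n in (12, 31, 60):
    print(n, round(approx_power(n), 2))
# Approximately 0.24, 0.52 and 0.80 -- close to her 0.2, 0.5 and 0.8
```

The approximation lands close to her three benchmarks: about 60 animals per group are needed before power reaches the conventional .8 standard.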

Zalocusky states the following:

If our intuitions about our research are true, fellow graduate students, then fully 70% of published positive findings are “false positives.” This result furthermore assumes no bias, perfect use of statistics, and a complete lack of “many groups” effect. (The “many groups” effect means that many groups might work on the same question. 19 out of 20 find nothing, and the 1 “lucky” group that finds something actually publishes). Meaning—this estimate is likely to be hugely optimistic.

All of these things make it rather clear that a large fraction (if not most) of animal memory experiments are dubious. There is another reason why the great majority of these experiments tell us nothing about human memory: most of them involve rodents, and given the vast differences between humans and rodents, nothing reliable about human memory can be determined by studying rodent memory.