Monday, January 7, 2019

Memories Can Form Many Times Faster Than the Speed of Synapse Strengthening

The main theory of brain memory storage is that people acquire new memories through a strengthening of synapses. There are many reasons for doubting this claim. One is that information is generally stored through a writing process, not a strengthening process. There has apparently never been a verified case of any information being stored through a process of strengthening.

If it were true that memories were stored by a strengthening of synapses, this would be a slow process. The only way in which a synapse can be strengthened is if proteins are added to it, and we know that the synthesis of new proteins is a rather slow effect, requiring minutes. In addition, there would have to be some very complicated encoding going on if a memory were to be stored in synapses: newly learned knowledge and new experience would somehow have to be encoded or translated into some brain state that would store this information. When we add up the time needed for this protein synthesis and the time needed for this encoding, we find that the theory of memory storage in brain synapses predicts that the acquisition of new memories should be a very slow affair, occurring at only a tiny bandwidth, a mere trickle. But experiments show that we can actually acquire new memories at a speed more than 1000 times greater than such a trickle.

One such experiment is the experiment described in the scientific paper “Visual long-term memory has a massive storage capacity for object details.” The experimenters showed some subjects 2500 images over the course of five and a half hours, and the subjects viewed each image for only three seconds. Then the subjects were tested in the following way described by the paper:

Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images.

In this experiment, pairs like those shown below were used. A subject might be presented for 3 seconds with one of the two images in a pair, and then hours later be shown both images in the pair, and asked which of the two he had seen.



Although the authors probably did not intend for their experiment to be any such thing, their experiment is a great experiment for disproving the prevailing dogma about memory storage in the brain. Let us imagine that memories were being stored in the brain by a process of synapse strengthening. Each time a memory was stored, it would involve the synthesis of new proteins (requiring minutes), plus additional time (presumably also minutes) for an encoding effect in which knowledge or experience was translated into neural states. If the brain stored memories in such a way, it could not possibly keep up with remembering images that appeared for only three seconds each in a long series. It would be a state of affairs like that depicted in what many regard as the funniest scene in the “I Love Lucy” TV series, the scene in which Lucy and her friend Ethel were working on a confection assembly line. In that scene Lucy and Ethel were supposed to wrap chocolates that were moving along a conveyor belt. But while the chocolates moved slowly at first, the conveyor belt kept speeding up faster and faster, totally exceeding Lucy and Ethel's ability to wrap the chocolates (with hilarious results).


The experiment described above in effect creates a kind of fast-moving conveyor belt in which images fly by at a speed so fast that it should totally defeat a person's ability to memorize accurately – if our memories were actually being created through the slow process imagined by scientists, in which each memory requires minutes of protein synthesis, plus additional time (probably also minutes) for encoding. But nonetheless the subjects did extraordinarily well in this test.
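The arithmetic of the experiment makes the mismatch plain. The sketch below assumes, purely hypothetically, five minutes per memory for protein synthesis and encoding (the paper does not give such a figure; the theory only says "minutes"):

```python
# Pacing of the experiment vs. the pacing the synaptic theory would allow.
images = 2500
session_hours = 5.5

# Average time available per image across the whole session:
seconds_available_per_image = session_hours * 3600 / images
print(round(seconds_available_per_image, 1))   # 7.9 seconds per image

# Hypothetical time the synaptic theory requires per memory
# (5 minutes of protein synthesis and encoding, as an illustration):
seconds_needed_per_memory = 5 * 60
print(seconds_needed_per_memory)               # 300 seconds per memory

# The theory's requirement exceeds the available time many times over:
print(round(seconds_needed_per_memory / seconds_available_per_image))
```

Even on this generous assumption, the theory demands dozens of times more time per memory than the experiment's subjects actually had.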

There is only one conclusion we can draw from such an experiment. It is that the bandwidth of human memory acquisition is vastly greater than anything that can be accounted for by neural theories of memory storage. We do not remember at the speed of synapse strengthening, which is a snail's speed similar to the speed of arm muscle strengthening. We instead are able to form new memories in a manner that is basically instantaneous. The authors of the scientific paper state that their results “pose a challenge to neural models of memory storage and retrieval.” That is an understatement, for we could say that their results are shockingly inconsistent with prevailing dogmas about how memories are stored.

There are some people who are able to acquire new memories at an astonishing rate. The autistic savant Kim Peek was able to recall everything in the more than 7000 books he had read. Here we had a case in which memorization occurred at the speed of reading. Stephen Wiltshire is an autistic savant who has produced incredibly detailed and accurate artistic works depicting cities that he has seen only from a brief helicopter ride or boat ride. Of Wiltshire, savant expert Darold Treffert says, "His extraordinary memory is illustrated in a documentary film clip, when, after a 12-minute helicopter ride over London, he completes, in 3 hours, an impeccably accurate sketch that encompasses 4 square miles, 12 major landmarks and 200 other buildings all drawn to scale and perspective." Again, we have a case in which memories seem to be formed at an incredibly fast rate. Savant Daniel Tammet (who once publicly and accurately recited the value of pi to 22,514 digits) was able to learn the Icelandic language in only 7 days. Derek Paravicini is a blind and brain-damaged autistic savant who has the incredible ability to replay any piece of music after hearing it only once. In 2007 the Guardian reported the following:

Derek is 27, blind, has severe learning difficulties, cannot dress or feed himself - but play him a song once, and he will not only memorize it instantly, but be able to reproduce it exactly on the piano. One part of his brain is wrecked; another has a capacity most of us can only dream of.

Other savants such as Leslie Lemke and Ellen Boudreaux have the same extraordinary ability to replay perfectly a song heard for the first time. 

Cases such as these are inconsistent with prevailing theories of memory. Are we to believe that such people (typically with substantial brain damage) can somehow synthesize proteins in their brains ten times or thirty times faster than the average human, so that their synapses can get bulked up ten times or thirty times faster? That's hardly credible. But if memories are not actually stored in brains, but stored in or added to a human psychic or spiritual facility, something like a soul, then there would be no reason why the brain-damaged might not have astonishing powers of memorization.

Some people can form memories 1000 times faster than should be possible under prevailing theories of brain memory storage, which involve postulating protein synthesis and encoding operations that should take minutes. This thousand-fold shortfall is only one of three thousand-fold shortfalls of the prevailing theory of brain memory storage. The two other shortfalls are: (1) humans can remember things for 50 years or more, which is 1000 times longer than the synaptic theory of memory storage can account for (synapses having average protein lifetimes of only a few weeks); (2) humans can recall things 1000 times faster than should be possible if you stored something in some exact location of the brain. If you stored a memory in your brain (an organ with no numbering system or coordinate system), it would be like throwing a needle onto a mountain-sized heap of needles, in the sense that finding that exact needle at some later point should take a very long time.
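The first of those two shortfalls is simple arithmetic, sketched below (taking "a few weeks" as roughly two weeks of protein lifetime):

```python
# A 50-year memory versus synaptic proteins lasting about two weeks.
memory_lifetime_weeks = 50 * 52     # 50 years expressed in weeks
protein_lifetime_weeks = 2          # approximate average protein lifetime

# How many times over would the memory have to outlast its substrate?
ratio = memory_lifetime_weeks / protein_lifetime_weeks
print(ratio)   # on the order of a thousand-fold
```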

The imaginary conversation below illustrates some of the many ways in which prevailing dogma about brain memory storage fails. It's the kind of conversation that might occur if memories were formed according to the "brain storage of memory" dogmas that currently prevail among neuroscientists. 

Costello: Alright, guy, I'm now going to teach you an important geographical fact: which city is the capital city of Spain.
Abbott: Go ahead, I'm all ears.
Costello: Okay, here it is. The capital city of Spain is Madrid.
Abbott: Okay, I'll try to remember that.
Costello: So what is the capital city of Spain?
Abbott: I haven't formed the memory of that yet. It takes time. I'm still synthesizing the proteins I need to strengthen my synapses, so I can remember that.
Costello: So try hard. Remember, Madrid is the capital city of Spain.
Abbott: I'm working on forming the memory.
Costello: So do you remember by now what the capital city of Spain is?
Abbott: Don't ask me too soon. It takes minutes to synthesize those proteins.

After five additional minutes like this, the conversation continues.

Costello: Okay, so it's been five minutes since I first told you what the capital city of Spain is. You should have had enough time to have formed your memory of this fact.
Abbott: I'm sure by now I have formed that memory, because there has been enough time for protein synthesis in my synapses.
Costello: So what is the capital city of Spain?
Abbott: I can't recall.
Costello: But you formed the memory by now. Why can't you recall it?
Abbott: The problem is that I don't know exactly where in my brain the memory was stored. So I can't just instantly recall the memory. The memory is like a tiny needle in a haystack. There's no way I can find that quickly.
Costello: Can't you just search through all the memories in your brain, looking for this one?
Abbott: I could try, but it would take hours or days to search through all those memories.
Costello: Sheesh, this is driving me crazy. How about this? I can teach you that Madrid is the capital city of Spain, and when you form the memory, you can tell me the exact tiny spot where your memory was formed. So maybe you'll tell me, “Okay I stored that memory at brain neuron number 273,835,235.” Then I'll just say to you something like, “Please look in your brain at neuron number 273,835,235, and retrieve the memory you stored of what is the capital city of Spain.”
Abbott: That's a brilliant idea!
Costello: Thanks.
Abbott: On second thought, it will never work.
Costello: Why not?
Abbott: Neurons aren't numbered, and the brain has no coordinate system. It's like some vast city in which none of the streets are named, and none of the houses have house numbers. So if I put a memory in one little “house” in the huge brain city, I'll never be able to tell you the exact address of that house.
Costello: So how the hell am I supposed to teach you anything?
Abbott: Beats me. And if I ever learn anything new, I'm sure I won't remember it for more than a few weeks. That's because there's a big problem with those proteins that I will synthesize to store those new memories. They have average lifetimes of only a few weeks.

As long as they cling to “brain storage of memory” dogmas, our neuroscientists will never be able to overcome difficulties such as those mentioned in this conversation.

Thursday, December 20, 2018

The Lack of a Viable Theory of Neural Memory Encoding

If we are to believe in the claim that brains store human memories, we must have a credible account of four things: encoding, neural storage of very old memories, the instantaneous formation of memories, and the instantaneous retrieval of memories. The theory that human memories are stored in the brain fails in regard to each of these things.

There exists no plausible theory as to how a brain could store memories lasting for 50 years, but we know humans can remember many things for that long. The most popular idea of brain memory storage claims that memories are stored in synapses, but the proteins in synapses have an average lifetime of less than two weeks, meaning such a theory falls short by a factor of 1000 when it comes to explaining memories that persist for 50 years. As for memory retrieval, there is no theory explaining how humans could possibly recall instantly things they learned many years ago, and haven't thought about in years. You may hear the name of some obscure historical or cultural figure you learned about decades ago, and haven't heard about or thought about since that time. You may then instantly recall something about that person. But if that memory was stored somewhere in your brain, how could you instantly find the exact little location where that memory was? Doing that (for example, instantly finding a memory in storage spot 834,220 out of 1,200,000) would be like instantly finding a needle in a mountain-sized haystack. If a brain had an indexing system, or a coordinate system, or a neuron numbering system, there might be a faint hope for explaining instantaneous memory retrieval; but the brain has no such things. As for the instantaneous formation of memories, there is no theory that can account for it in a brain. The prevailing theory that memories are stored by synapse strengthening (which would involve protein synthesis requiring minutes) fails to account for memories that humans can form instantly.
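The needle-in-a-haystack point can be made concrete with a toy sketch. The storage-spot count below is illustrative, not anatomical; the point is that without any numbering system or index, retrieval degenerates into a spot-by-spot scan:

```python
import random

# A toy "brain" with 1,200,000 unlabeled storage spots, exactly one of
# which holds the sought memory. The spot is chosen at random.
N = 1_200_000
target = random.randrange(N)
spots = [None] * N
spots[target] = "memory of an obscure historical figure"

def scan_for_memory(spots):
    """With no addresses, the only option is checking spots one by one."""
    checked = 0
    for contents in spots:
        checked += 1
        if contents is not None:
            return contents, checked
    return None, checked

memory, spots_checked = scan_for_memory(spots)
# On average this examines about 600,000 spots before succeeding.

# With an index (which the brain lacks), retrieval is a single lookup:
index = {"memory of an obscure historical figure": target}
assert index[memory] == target
```

The dictionary lookup at the end is instantaneous precisely because it relies on an addressing scheme; the scan above it is what retrieval looks like without one.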

When we consider the issue of memory encoding, we find a difficulty as great as the difficulties just discussed. Encoding is supposedly some translation that occurs so that a memory can be physically stored in a brain in a form that might last for years. The problem is that human memories include incredibly diverse types of things, and we have no idea how most of these things could be stored as neural states. Consider only a few of the types of things that can be stored in a human memory:

  • Memories of daily experiences, such as what you were doing on some day
  • Facts you learned in school, such as the fact that Lincoln was shot at Ford's Theater
  • Sequences of numbers such as your social security number
  • Sequences of words, such as the dialog an actor has to recite in a play
  • Sequences of musical notes, such as the notes an opera singer has to sing
  • Abstract concepts that you have learned
  • Memories of particular non-visual sensations such as sounds, food tastes, smells, pain, and physical pleasure
  • Memories of how to do physical things, such as how to ride a bicycle
  • Memories of how you felt at emotional moments of your life
  • Rules and principles, such as “look both ways before crossing the street”
  • Memories of visual information, such as what a particular person's face looks like

How could all of these very different types of information ever be translated into neural states so that a brain could store them?

Our neuroscientists have told us again and again that the brain does such an encoding, but there is no real evidence that any such thing takes place. What we have evidence for is merely evidence that humans remember things. If you are someone who believes that memories are physically stored in brains, then you may claim that memory encoding occurred at such and such a rate whenever you observe people learning something at such and such a rate. But merely observing evidence of learning or memory is not acquiring any actual evidence that encoding has occurred. There remains the possibility that our memories are not stored as neural states, the possibility that our repository of memory is some spiritual or psychic facility that is non-neural and non-biological.

Such a possibility should not seem remote when we consider that there is no workable theory as to how learned knowledge and experiences could be encoded so that they might be stored in a brain. No matter what theory we may create to account for the encoding of learned knowledge and episodic experience so that they can be stored in a brain, such a theory will always end up sounding ridiculous after we examine the theory in detail and consider its requirements and shortcomings. Let's look at some possibilities, and why they fail.

Theory #1: Direct writing of words and images

First, let's consider the simplest theory of encoding we can imagine – that a memory is stored in the brain so that it appears in a neural form pretty much as we see it in our minds. Under this theory, when you memorized some series of words, this would cause a sequence of microscopic little letters to become stored in your brain; and when you experienced some visual experience, this would get stored as some tiny little image in your brain. So, for example, under this theory, if someone memorized the sentence, “There may be green aliens in the center of the galaxy,” then after the person died, some scientist might examine that person's brain with an electron microscope, and actually find some tiny little words in some neurons, words that directly spelled out, “There may be green aliens in the center of the galaxy.” And under this theory, if someone was given a picture of a toy purple pony, and asked to memorize it, then after the person died, a scientist might be able to examine the person's brain under an electron microscope, and the scientist might say, “Aha, I see in his neurons a tiny little image of a toy purple pony.”

This theory may immediately provoke giggles, and it is rather easy to think of some reasons why it does not work. They are these:

  1. If memory worked in such a way, we would surely have already discovered such easily-recognizable memory traces. But no such things have been seen, even though a great deal of human neural tissue has been examined at very high magnification. When we look at brain tissue at the highest magnification, we see no tiny little letters or tiny little images of animals, cars, and persons.
  2. For a brain to be able to write words that we memorized in this type of direct manner, it would seem that the brain would need some very precise write mechanism, capable of forming the exact characters of the alphabet in brain tissue; but no such brain capability is known to exist.
  3. It seems that if such a theory were true, recalling some words would be like reading. But recalling words is almost never like reading, and we don't see in our mind's eye some stream of letters as we recall some words we memorized.
  4. For a brain to be able to read words that we memorized in this type of direct manner, it would seem that the brain would need some very precise reading mechanism, capable of reading the exact characters of the alphabet stored in very tiny letters written in brain tissue; but no such thing is known to exist. We don't have tiny little “micro-eyes” in our brains that might allow us to read tiny microscopic letters stored in our brains.

Theory #2: Brain storage of words and images using some unknown non-binary coding or translation protocol

Now, let's consider a different theory of memory encoding – the idea that instead of directly storing words and images (so that we could directly read the words and directly see the images), the brain uses some type of unknown coding or translation protocol. For example, it could conceivably be that words we learn are somehow translated into proteins or chemicals or electrical states, using some as-yet-undiscovered translation scheme.

For example, such a scheme might work a little like this:

  • Letter “A”: some particular neural arrangement of atoms, chemicals or electricity
  • Letter “B”: some other neural arrangement of atoms, chemicals or electricity
  • Letter “C”: some other neural arrangement of atoms, chemicals or electricity

Such a scheme might work a little like the Morse code, in which particular letters are translated into some sequence of dots, dashes, or dots and dashes. Some particular arrangement of atoms, chemicals or electricity might work like a dot in the Morse code, and some other particular arrangement of atoms, chemicals or electricity might work like a dash in the Morse code.
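To see what such a letter-by-letter scheme would amount to, here is a toy illustration using actual Morse code as the stand-in for the hypothetical neural "dots and dashes":

```python
# A toy letter-to-symbol translation table, with real Morse code standing
# in for whatever neural arrangements the theory would have to postulate.
MORSE = {"S": "...", "U": "..-", "N": "-."}

def encode(word):
    # Translate each letter into its coded form, letter by letter.
    return " ".join(MORSE[letter] for letter in word)

print(encode("SUN"))   # ... ..- -.
```

Even this trivial scheme presupposes a fixed, agreed-upon lookup table; the theory would require the brain to somehow contain the biological equivalent of one.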

Or there could be some higher-level translation system based on particular words rather than letters. For example, we can imagine something like this:


  • Word “sun”: some particular neural arrangement of atoms, chemicals, proteins or electricity
  • Word “man”: some other neural arrangement of atoms, chemicals, proteins or electricity
  • Word “move”: some other neural arrangement of atoms, chemicals, proteins or electricity

There is one giant problem with such a theory. All of the languages that we use are fairly recent innovations, having been created in only the last few percent of the time that humans have existed. For example, back in the Roman Empire people used Latin, but the English we use today has been in use for less than 1200 years. The alphabet used for English is less than 1000 years old, and its alphabetic predecessor (the Latin alphabet) is only a few thousand years old. It is generally acknowledged even by Darwinism enthusiasts that very complex evolutionary innovations cannot arise in only a few centuries or a few thousand years. So we could never explain how the brain could naturally possess some elaborate translation system based on such a relatively recent innovation as the English language and the English alphabet.

Scientists strain our credulity whenever they talk about novel functional genes accidentally appearing even over the course of a million years. Think, then, how much greater a problem there would be in explaining how hundreds of novel functional genes could have appeared in less than 3000 years, to perform some translation operation involving characters and words that have existed for less than 3000 years. To assume such a thing would be to assume evolution working thousands of times faster than the rate we would predict from known mutation rates.

There is also no evidence that any such great burst of genetic novelty has occurred. Although the half-life of DNA is about 521 years, we have enough samples of human DNA from ancient Rome and ancient Egypt to know that there has been no big change in the DNA of humans during the past 3000 years. So it seems impossible that there could be any genetic capability (arising in the past few thousand years) that would allow humans to neurally store information using some encoding mechanism specifically tailored to the letters and words of the English language that have existed for less than 3000 years.

Another difficulty with the theory of encoding just mentioned is that if it existed, we would see big differences in the genes of people who spoke different languages. According to such a theory, we would expect that Chinese people would have one group of genes corresponding to proteins or RNA molecules needed to translate Chinese words into neural states, and that English speaking people would have some other quite different set of genes corresponding to proteins or RNA molecules needed to translate English words into neural states (particularly since the Chinese language and alphabet is so different from the English language and alphabet). But there exists no such difference in the genes of Chinese speaking people and English speaking people.

There is also the difficulty, discussed more fully in the conclusion of this post, that there is no sign in the human genome that any such genes exist for performing such an elaborate operation of encoding human learned knowledge and episodic experience so that it can be stored in neurons or synapses (and there would need to be many hundreds or thousands of genes dedicated to performing such a task if it was done).

Theory #3: Binary writing of words and images

Now, let's consider the theory that the words we memorize and the images we remember are stored in binary format. We know that computers store information in binary format, so when it is suggested that the brain may use a similar format, this may sound reasonable to the average person (although it isn't, a brain being radically different from an electronic computer).

This possibility actually has all of the difficulties of the previous possibility. What goes on when your computer stores words in binary format is the following:

  1. First, individual letters in the words are converted into decimal numbers (such as 99, 97, and 116 for the letters “c,” “a,” and “t”) using a particular translation table called the ASCII code.
  2. Then, those numbers are converted from decimal to binary using a decimal-to-binary conversion routine.
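The two steps can be sketched in a few lines of Python, using the word "cat" as an example:

```python
# The two-step storage of the word "cat" in a computer:
word = "cat"

# Step 1: the ASCII table maps each letter to a decimal number.
decimal_codes = [ord(letter) for letter in word]
print(decimal_codes)   # [99, 97, 116]

# Step 2: each decimal number is converted to binary.
binary_codes = [format(code, "08b") for code in decimal_codes]
print(binary_codes)    # ['01100011', '01100001', '01110100']
```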

So if we are to believe that the brain does binary encoding like a computer, we would need to believe that built into the brain on a low level is some type of translation scheme like the one below, a scheme in which letters are translated into decimal numbers.

ASCII table used by a computer to store encoded information

In addition, we would also have to believe that the brain has some kind of capability to translate the numbers in such a system into binary numbers. Alternately, we could believe that the brain has a scheme for directly translating characters into binary, but the overall complexity of such a translation mechanism would be every bit as great as a system in which characters are converted into decimal, and then into binary.

We have the following difficulties involved with such an idea:

  1. If memory worked in such a way, we would surely have already discovered such easily-recognizable memory traces. We would have discovered tiny little traces in the brain that resemble binary coding. But no such things have been seen, even though a great deal of neural tissue has been examined at very high magnification.
  2. For a brain to be able to write words that were memorized in this type of direct manner, it would seem that the brain would need some very precise write mechanism, capable of writing binary traces; but no such thing is known to exist.
  3. For a brain to be able to read words that we memorized in this type of direct manner, it would seem that the brain would need some very precise reading mechanism, capable of reading in binary; but no such thing is known to exist.
  4. Since the alphabets of human languages are only a few thousand years old, there would have been no time for the human body to have evolved some complex biological mechanism capable of converting specific alphabetic characters to binary.
  5. We can imagine no way in which a brain could achieve the translation effect in which words are translated into binary. As far as we know, there is nothing anything like an ASCII table in your brain, nor is there anything like a facility for translating English letters directly into binary, nor is there anything like a facility for translating English letters into decimal, and then translating decimal numbers into binary. There are no genes in the genome that perform such tasks.

Theory #4: Storage of sensations occurring when something is learned or experienced

Now, let's consider a whole different theory. It could be that instead of storing words that you memorized, a brain might store the sensations that occurred when you memorized something. So imagine I open up a copy of Vogue magazine, and read an ad saying, “You'll feel fresher than the morning dew.” If I memorize that slogan, it might be that my brain is storing the electrical or chemical activity in my brain when I saw that slogan. And if I hear on the radio some advertising slogan of “We'll build your wealth sky-high,” and memorize that, it could be that my brain is storing something corresponding to the electrical and chemical activity in my brain when I heard such a slogan.

One reason for doubting this theory of memory encoding is that perceptions involve large parts of the brain, and it is hard to imagine that sensations involving large fractions of the brain could be stored in a tiny part of the brain. Since our minds store many thousands or millions of visual memories, if we are to believe that memories are stored in brains, we would have to believe that each stored memory uses only a tiny portion of the brain. But my current visual sensations require the involvement of a large fraction of the occipital lobes of the brain – probably many cubic centimeters. We cannot plausibly imagine that the brain simply dumps the chemical or electrical contents of those cubic centimeters into some memory taking up only a tiny space in the brain, a millionth or less. It would seem, therefore, that if the brain simply dumped your visual and auditory sensations into memory, it would require a space vastly bigger than itself to store all the memories we have of things we have seen and heard.

Another difficulty with the “memories are sensation dumps” idea is that there is no evidence of any mechanism for copying information from one part of the brain to another. Let's imagine that when a memory forms, the state of your occipital lobes (involved in vision) is copied to some point on the cortex where the memory is stored. That would require some biological functionality for copying the state of one large part of the brain to another part of the brain; but we know of no such functionality.

It is easy to think of another big reason for doubting such a theory. It is that if our brains were to be storing something corresponding to sensations, we would expect that when we remembered words, we would remember something visual, with a characteristic font or color, or something auditory, with a characteristic sound. But while that sometimes happens, in almost all cases it does not work that way.

For example, if I remember the words, “Here's looking at you, kid,” I do remember a very specific audio sound, the distinctive sound of Humphrey Bogart saying that in the movie Casablanca. And if I remember the phrase “Men walk on moon,” I do remember a specific font, the font used in the famous New York Times headline of the Apollo 11 lunar mission. But a very large fraction of my memories do not have specific visual or audio characteristics. For example, if someone asks, “What is the lightest particle in an atom?” I may reply, “The electron is the lightest particle in the atom.” But that memory I have recalled does not have any specific sound or sight associated with it. I don't hear the answer in someone's voice, and I do not see the answer as words in any particular font or color. And if someone asks me, “What is your birthday?” I will remember a particular day. But I will not see in my mind's eye some date written in a particular font and having some particular color, nor will I hear in my mind's ear some particular type of voice stating the answer.

For almost all of my knowledge memories, the same thing is true: when I recall the memory, I don't see something that appears with some particular visual appearance, nor do I hear something that has some particular sound. It seems this would not be the case if the brain was storing my memories by storing visual and auditory sensations.

It is also true that I can memorize something that does not correspond to any particular visual or auditory sensation I had. For example, I can visualize some imaginary thing such as a giant purple elephant. If I think about this imaginary thing enough times, it will become a permanent memory. But this imaginary thing I have memorized does not correspond to any sensation I had. In this case I never saw a giant purple elephant. So it cannot be that my memory of the giant purple elephant was formed from sensations that I had of such a thing. Similarly, a fiction writer can dream up on Tuesday an idea for a short story, and then write that short story on Wednesday, using the memory he formed on Tuesday. But the memory will not correspond to any visual or auditory sensations he had.

It seems that we therefore cannot explain memories as merely being a storage of sensations that we had at some time when we learned something. You learned many thousands of things in school, but when you remember such knowledge, you virtually never remember the sight and sound of your school teacher teaching you such things or the sight of you reading a book telling you such things (as you would if your memory of learned knowledge was just a dump of the sensations you had when you learned such things).

Conclusion

I have reviewed some of the theories that could be used to account for the encoding of learned information as neural states. There are strong reasons for rejecting each such theory. It seems it is impossible to present a specific theory of memory encoding that stands up to scrutiny as a reasonable possibility. 

How is it that neuroscientists sidestep this difficulty? They simply avoid presenting specific theories of how a brain could translate learned information into neural states. An example is the wikipedia.org article on “Memory (encoding).” The article tells us, “Encoding allows the perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from short-term or long-term memory.”
But the article fails to present any specific account of how such a conversion would work. The article has all kinds of digressions and tangential information, but nowhere does it advance a single specific idea of how learned information (such as a learned sequence of words) could be translated into neural states when a memory was stored in a brain. An article on a memory experiment states, "Press a scientist to tell you how memories are encoded and decoded in the brain, and you’ll soon find that the scientific community doesn’t have an answer."  In his book Crimes of Reason: On Mind, Nature, and the Paranormal, philosopher Stephen Braude says on page 19 that neuroscience "never addresses the fundamental issues of how any physical modification can represent or stand for what is remembered." 

A speculative neuroscience paper confesses that "codifying memories is one of the fundamental problems of modern Neuroscience," but that "the functional mechanisms behind this phenomenon remain largely unknown."  It would have been more accurate to have stated "entirely unknown." 

Just as it is impossible to advance a credible detailed theory as to how Santa Claus could distribute a toy to every good little boy and girl in the world in a single 24-hour period, it is impossible to advance a credible detailed theory of how learned knowledge and episodic experience could be encoded and permanently stored as neural states. If memories were to be encoded so that they could be stored in brains, there would be two major “footprints” of such a thing, physical traces showing that it was going on. They are the following:

High repetition of representational building blocks. Whenever encoded information is stored, there is a repetition of two or more things we can call representational building blocks or representational atoms. In binary encoding these representational building blocks are the 1's and 0's or their electromagnetic equivalent. In alphabetic encoding, the representational building blocks are the letters of the alphabet. In DNA the representational building blocks are the four types of nucleotide base pairs that are repeated over and over again. In Morse code, the representational building blocks are dots and dashes. Even if you don't know the system of encoding that was used, it is easy to detect that encoded information is present, by seeing a high repetition of representational building blocks. If brains stored memories encoded into neural states, we would see a gigantic degree of repetition of some type of representational building blocks.
Genes dedicated to memory encoding. If human brains were to actually be translating thoughts and sensory experiences so that they can be stored as memory traces, such a gigantic job would require a huge number of genes – many times more than the 500 or so genes that are used for the very simple encoding job of translating DNA nucleotide base pairs into amino acids.
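
The first of these footprints is easy to check for computationally. The toy sketch below (an illustration of the general point, not a neuroscience tool) profiles a sequence by counting how many distinct symbols it uses and how heavily each one repeats: an encoded string such as a DNA-style sequence uses a tiny alphabet repeated hundreds of times, while a stream of raw analog measurements shows almost no repetition at all.

```python
import random
from collections import Counter

def building_block_profile(seq):
    """Return (number of distinct symbols, average repetitions per symbol)."""
    counts = Counter(seq)
    distinct = len(counts)
    mean_repetition = sum(counts.values()) / distinct
    return distinct, mean_repetition

# Encoded data: a 1000-symbol DNA-style string built from 4 building blocks
dna = "ACGTTGACCGTAACGTTGCA" * 50
print(building_block_profile(dna))     # (4, 250.0) -- few symbols, massive repetition

# Unencoded analog data: 1000 continuous measurements, essentially no repetition
analog = [random.random() for _ in range(1000)]
print(building_block_profile(analog))  # roughly (1000, 1.0)
```

This is the signature the post describes: whatever the encoding scheme, a small repeated alphabet stands out immediately, even to an observer who does not know the scheme.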

There is no sign at all of either of these things in the brain. We absolutely do not see anything like very highly repeated representational building blocks in the brain that might be the footprints of encoded memory information. And we see no sign of any such memory encoding genes in the human genome.

There is a study that claims to have found possible evidence of memory encoding genes, but its methodology is ridiculous, and involved the absurd procedure of looking for weak correlations between a set of data extracted from one group of people and another set of data retrieved from an entirely different group of people. See the end of this post for reasons we can't take the study as good evidence of anything. There is not one single gene that a scientist can point to and say, “I am sure this gene is involved in memory encoding, and I can explain exactly how it works to help translate human knowledge or experience into engrams or memory traces.” But if human memories were actually stored in brains, there would have to be thousands of such genes.

[Image: memory encoding]

There is an additional general difficulty involved in the idea that brains encode our episodic experiences and learned knowledge as neural states. It is that if the brain did such a thing, it would require a translation facility so marvelous that it would be a “miracle of design,” something that we would never expect to ever appear by unguided evolution.

Yet another problem is that if brains encoded learned knowledge and episodic experiences into engrams or memory traces,  then forming new memories would be slow, and retrieving memories would be slow -- for whenever we formed a new memory, all kinds of translation work and encoding would have to be done (which would take a while), and whenever we retrieved a stored memory, all kinds of decoding work would have to be done (which would take a while).  But instead humans can form memories instantly and retrieve memories instantly, much faster than things would work if all this encoding and decoding work had to be done. 

What has been discussed here is only a small fraction of the very large case for thinking that human cognition and memory must be some psychic or spiritual reality rather than a biological or neural reality.  

Let's imagine a boy who thinks that his mother will give him a pony for Christmas. December 25th comes, and the boy searches everywhere around his house and yard; but there's no pony. On December 28th, having spent several more days looking for such a pony without success, the boy says, "There must be a pony around here somewhere."  We may compare this boy to the modern neuroscientist who believes there are brain engrams that encode our learned knowledge, but who still hasn't found such things, despite decades of searching. He says to himself, "They must be somewhere in the brain." But if they existed, they would have been found long ago.  There is microscopic encoded information in DNA (specifying the amino acids in proteins). Unmistakable evidence of that was discovered around 1950. Why would we have failed to discover encoded memory information in the brain by the year 2018, if such information actually existed there? It would be as easy to find as the genetic information in DNA. 

Thursday, November 22, 2018

Why Most Animal Memory Experiments Tell Us Nothing About Human Memory

Recently the BBC reported a science experiment with the headline “'Memory transplant' achieved in snails.” This was all over the science news on May 14. Scientific American reported it with a headline stating “Memory transferred between snails,” and other sites such as the New York Times site made similar matter-of-fact announcements of a discovery. But you need not think very hard to realize that there's something very fishy about such a story. How could someone possibly get decent evidence about a memory in a snail?

To explain why this story and similar stories do not tell us anything reliable about memory, we should consider the issue of small sample sizes in neuroscience studies. The issue was discussed in a paper in the journal Nature Reviews Neuroscience entitled “Power failure: why small sample size undermines the reliability of neuroscience.” The article tells us that neuroscience studies tend to be unreliable because they are using too small a sample size. When there is too small a sample size, there's too high a chance that the effect reported by a study is just a false alarm. An article on this important paper states the following:

The group discovered that neuroscience as a field is tremendously underpowered, meaning that most experiments are too small to be likely to find the subtle effects being looked for and the effects that are found are far more likely to be false positives than previously thought. It is likely that many theories that were previously thought to be robust might be far weaker than previously imagined.

I can give a simple example illustrating the problem. Imagine you try to test extrasensory perception (ESP) using a few trials with your friends. You ask them to guess whether you are thinking of a man or a woman. Suppose you try only 10 trials with each friend, and the best result is that one friend guessed correctly 70% of the time. This would be very unconvincing as evidence of anything. Purely by chance, there's about a 17 percent probability of guessing 7 or more out of 10 correctly on any such test; and if you test five people, the odds are better than even that at least one of them will score that well purely by chance. So having one friend get 7 out of 10 guesses correct is no real evidence of anything. But if you used a much larger sample size it would be a different situation. For example, if you tried 1000 trials with a friend, and your friend guessed correctly 700 times, that would have a probability of far less than 1 in a million of happening by chance. That would be much better evidence.
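
The arithmetic behind this example can be checked directly. The sketch below (ordinary binomial tail probabilities, nothing specific to ESP) computes the chance of scoring 7 or more out of 10 by guessing, the chance that at least one of five friends does so, and the chance of scoring 700 or more out of 1000:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes in n independent trials,
    each succeeding with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p7 = p_at_least(7, 10)
print(round(p7, 3))                   # 0.172 -- one friend hitting 7+/10 by luck
print(round(1 - (1 - p7)**5, 2))      # 0.61 -- at least one of five friends doing so
print(p_at_least(700, 1000) < 1e-30)  # True -- 700/1000 is essentially impossible by luck
```

With ten trials, a 70% score is unremarkable; with a thousand trials, the same percentage becomes overwhelming evidence. Sample size is doing all the work.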

Now, the problem with many a neuroscience study is that very small sample sizes are being used. Such studies fail to provide convincing evidence for anything. The snail memory test is an example.

The study involved giving shocks to some snails, extracting RNA from their tiny brains, and then injecting that RNA into other snails that had not been shocked. It was reported that such snails had a higher chance of withdrawing into their shells, as if they were afraid and remembered being shocked, even though they had never been shocked. But it might have been that such snails were merely acting randomly, not experiencing any fear memory transferred from the first set of snails. How can you have confidence that mere chance was not involved? You would have to do many trials, or use a sample size that guarantees that sufficient trials will occur. This paper states that in order to have moderate confidence in results, getting what is called a statistical power of .8, there should be at least 15 animals in each group. This statistical power of .8 is a standard for doing good science. 

But judging from the snail paper, the scientists did not do a large number of trials. The effect described involved only 7 snails (the number listed on lines 571-572 of the paper), and there is no mention of trying the test more than once on those snails. Such a result is completely unimpressive, and could easily have been achieved by pure chance, without any real “memory transfer” going on. Whether the snail does or does not withdraw into its shell is like a coin flip. It could easily be that by pure chance you might see some number of “into the shell withdrawals” that you interpret as “memory transfer.”

Whether a snail is withdrawing into its shell requires a subjective judgment, where scientists eager to see one result might let their bias influence their judgments about whether the snail withdrew into its shell or not. Also, a snail might withdraw into its shell simply because it has been injected with something, not because it is remembering something. Given such factors and the large chance of a false alarm when dealing with such a small sample size, this “snail memory transfer” experiment offers no compelling evidence for anything like memory transfer. We may also note that the idea of RNA storing long-term memories in animals is entirely implausible, because of RNA's very short lifetime. According to this source, RNA molecules typically last only about two minutes, with 10 to 20 percent lasting between 5 and 10 minutes. And according to this source, if you were to inject RNA into a bloodstream, the RNA molecules would be too large to pass through cell membranes.

The Tonegawa memory research lab at MIT periodically puts out sensational-sounding press releases on its animal experiments with memory. Among the headlines on its site are the following:
  • “Neuroscientists identify two neuron populations that encode happy or fearful memories.”
  • “Scientists identify neurons devoted to social memory.”
  • “Lost memories can be found.”
  • “Researchers find 'lost' memories”
  • “Neuroscientists reverse memories' emotional associations.”
  • “How we recall the past.”
  • “Neuroscientists identify brain circuit necessary for memory formation.”
  • “Neuroscientists plant false memories in the brain.”
  • “Researchers show that memories reside in specific brain cells.”
But when we take a close look at the issue of sample size and statistical power, and at the actual experiments that underlie these claims, it seems that few or none of these claims are based on solid, convincing experimental evidence. Although the experiments underlying these claims are very fancy and high-tech, they use sample sizes so small that very little of the work qualifies as convincing evidence.

A typical experiment goes like this: (1) Some rodents are given electrical shocks; (2) the scientists try to figure out where in the rodent's brain the memory was; (3) the scientists then use an optogenetic switch to “light up” neurons in a similar part of another rodent's brain, one that was not fear trained; (4) a judgment is made on whether the rodent froze when such a thing was done.

Such experiments have the same problems I mentioned above with the snail experiment: the problem of subjective interpretations and alternate explanations. The MIT memory experiments typically involve a judgment of whether a mouse froze. But that may often be a hard judgment to make, particularly in borderline cases. Also, we have no way of telling whether a mouse is freezing because he is remembering something. It could be that the optogenetic zap that the mouse gets is itself sufficient to cause the mouse to freeze, regardless of whether it remembers something. If you're walking along, and someone shoots light or energy into your brain, you might stop merely because of the novel stimulus. A science paper says that it is possible to induce freezing in rodents by stimulating a wide variety of regions. It says, “It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014).”

But the main problem with such MIT memory experiments is that they involve very small sample sizes, so small that all of the results could easily have happened purely because of chance. Let's look at some sample sizes, remembering that according to this scientific paper, there should be at least 15 animals in each group to have moderate confidence in your results, sufficient to reach the standard of a “statistical power of .8.”

Let's start with their paper, “Memory retrieval by activating engram cells in mouse models of early Alzheimer’s disease,” which can be accessed from the link above after clicking underneath "Lost memories can be found." The paper states that “No statistical methods were used to predetermine sample size.” That means the authors did not do what they were supposed to do to make sure their sample size was large enough. When we look at page 8 of the paper, we find that the sample sizes used were merely 8 mice in one group and 9 mice in another group. On pages 2 and 4 we read of groups with only 4 mice apiece. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. The study therefore provides no convincing evidence of engram cells.

Another example is this paper by the MIT memory lab, with the grandiose title “Creating a False Memory in the Hippocampus.” When we look at Figure 2 and Figure 3, we see that the sample sizes used were paltry: the different groups of mice had only about 8 or 9 mice per group. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. No convincing evidence has been provided of creating a false memory.

A third example is this paper with the grandiose title “Optogenetic stimulation of a hippocampal engram activates fear memory recall.” Figure 2 tells us that in one of the groups of mice there were only 5 mice, and that in another group there were only 3 mice. Figure 3 tells us that in two other groups of mice there were only 12 mice. Figure 4 tells us that in some other group there were only 5 mice. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. No convincing evidence has been provided of artificially activating a fear memory by the use of optogenetics.

Another example is this paper entitled “Silent memory engrams as the basis for retrograde amnesia.” Figure 1 tells us that the number of mice in particular groups used for the study ranged between 4 and 12. Figures 2 and 3 tell us that the number of mice in particular groups used for the study ranged between 3 and 12. Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. Another unsound paper is the 2015 paper "Engram Cells Retain Memory Under Retrograde Amnesia," co-authored by Tonegawa. When we look at the end of the supplemental material, and look at figure s13, we find that the experimenters were using a number of mice that was equal to only 8 in one study group, and 7 in another study group.  Such a paltry sample size does not result in any decent statistical power, and the results cannot be trusted, since they very easily could be false alarms. 

We see the same "low statistical power" problem in this paper claiming an important experimental result regarding memory. The paper states in its Figure 2 that only 6 mice were used for a study group, and 6 mice for the control group. The same problem is shown in Figure 3 and Figure 4 of the paper. We see the same "low statistical power" problem in this paper entitled "Selective Erasure of a Fear Memory." The paper states in its Figure 3 that only 6 to 9 mice were used for a study group. That's only about half of the "15 animals per study group" needed for a modestly reliable result. The same defect is found in this memory research paper and in this memory research paper.

The term “engram” means a cell or cells that store memories. Decades after the term was created, we still have no convincing evidence for the existence of engram cells. But memory researchers are shameless in using the term “engram” matter-of-factly even though no convincing evidence of an engram has been produced. So, for example, one of the MIT Lab papers may again and again refer to some cells they are studying as “engram cells,” as if they could try to convince us that such cells are actually engram cells by telling us again and again that they are engram cells. Doing this is rather like some ghost researcher matter-of-factly using the term “ghost blob” to refer to particular patches of infrared light that he is studying after using an infrared camera. Just as a blob of infrared light merely tells us only that some patch of air was slightly colder (not that such a blob is a ghost), a scientist observing a mouse freezing is merely entitled to say he saw a mouse freezing (not that the mouse is recalling a fear memory); and a scientist seeing a snail withdrawing into its shell is merely entitled to tell us that he saw a snail withdrawing into its shell (not that the snail was recalling some fear memory).

The relation between the chance of a false alarm and the statistical power of a study is clarified in this paper by R. M. Christley. The paper has an illuminating graph, which I present below with some new captions that are a little clearer than the originals. We see from this graph that if a study has a statistical power of only about .2, then the chance of the study giving a false alarm is something like 1 in 3 if there is a 50% chance of the effect existing, and much higher (such as 50% or greater) if there is less than a 50% chance of the effect existing. But if a study has a statistical power of about .8, then the chance of the study giving a false alarm is only about 1 in 20 if there is a 50% chance of the effect existing, though still higher if there is less than a 50% chance of the effect existing. Animal studies using far fewer than 15 animals per study group (such as those I have discussed) will suffer the relatively high chance of false alarms shown in the green line.
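
The message of the graph can be reproduced with the standard formula for the probability that a statistically significant result is a false positive, given the study's power and the prior chance that the effect is real. The sketch below assumes the usual 5% significance threshold; Christley's exact curves depend on his own assumptions, so the numbers here are indicative rather than a reproduction of his figure.

```python
def false_alarm_rate(power, prior, alpha=0.05):
    """Probability that a significant result is a false positive.
    power: chance of detecting the effect if it is real
    prior: chance the effect is real before the study
    alpha: significance threshold (false positive rate under the null)"""
    fp = alpha * (1 - prior)   # significant results from true-null cases
    tp = power * prior         # significant results from real-effect cases
    return fp / (fp + tp)

print(round(false_alarm_rate(0.8, 0.5), 3))  # 0.059 -- well-powered study, ~1 in 17
print(round(false_alarm_rate(0.2, 0.5), 3))  # 0.2   -- underpowered study, 1 in 5
print(round(false_alarm_rate(0.2, 0.1), 3))  # 0.692 -- underpowered study of an unlikely effect
```

The qualitative lesson matches the graph: as power drops and the effect becomes less likely a priori, a "significant" result becomes more likely to be a false alarm than a discovery.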

[Graph: false positive rates at different levels of statistical power]

The PLOS paper here analyzed 410 experiments involving fear conditioning with rodents, a large fraction of them memory experiments. The paper found that such experiments had a “mean normalized effect size” of only .29. An effect that small is very hard to detect reliably with small samples, leaving a high chance of a false alarm. Effect size is discussed in detail here, where we learn that with an effect size of only .3, there's typically something like a 40 percent chance of a false alarm.

To determine whether a sample size is large enough, a scientific paper is supposed to do something called a sample size calculation. The PLOS paper here reported that only one of the 410 memory-related neuroscience papers it studied had such a calculation.  The PLOS paper reported that in order to achieve a moderately convincing statistical power of .80, an experiment typically needs to have 15 animals per group; but only 12% of the experiments had that many animals per group. Referring to statistical power (a measure of a study's ability to detect a real effect rather than produce a false alarm), the PLOS paper states, “no correlation was observed between textual descriptions of results and power.” In plain English, that means that there's a whole lot of BS flying around when scientists describe their memory experiments, and that countless cases of very weak evidence have been described by scientists as if they were strong evidence.

Our science media shows very little sign of paying any attention to the statistical power of neuroscience research, partially because rigor is unprofitable. A site can make more money by trumpeting weakly-suggestive borderline research as if it were a demonstration of truth, because the more users click on a sensational-sounding headline, the more money the site makes from ads. Our neuroscientists show little sign of paying much attention to whether their studies have a decent statistical power. For the neuroscientist, it's all about publishing as many papers as possible, so it's a better career move to do 5 underpowered small-sample studies (each with a high chance of a false alarm) than a single study with an adequate sample size and high statistical power.

In this post I used an assumption (which I got from one estimate) that 15 research animals per study group are needed for a moderately persuasive result. It seems that this assumption may have been too generous. In her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky PhD calculates (using Ioannidis’s data) that the median effect size of neuroscience studies is about .51. She then states the following, talking about statistical power:

To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group.

If we describe a power of .5 as being moderately convincing, it therefore seems that 31 animals per study group are needed for a neuroscience study to be moderately convincing. But most experimental neuroscience studies involving rodents and memory use fewer than 15 animals per study group.
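
Zalocusky's figures can be approximately reproduced with the textbook normal-approximation formula for the per-group sample size of a two-group comparison. This is a sketch assuming a two-sided 5% significance level; an exact t-test power calculation gives slightly different numbers, which is why the results below land within one or two animals of her figures rather than matching them exactly.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, power, alpha=0.05):
    """Approximate per-group sample size for a two-group comparison
    (normal approximation to the two-sample t-test power calculation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_power = NormalDist().inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Using her median neuroscience effect size of 0.51:
print(n_per_group(0.51, 0.8))  # 61 -- close to her figure of 60 per group
print(n_per_group(0.51, 0.5))  # 30 -- close to her figure of 31 per group
```

Against these requirements, the 3-to-12 mice per group seen in the papers discussed above fall far short of even the power-0.5 threshold.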

Zalocusky states the following:

If our intuitions about our research are true, fellow graduate students, then fully 70% of published positive findings are “false positives.” This result furthermore assumes no bias, perfect use of statistics, and a complete lack of “many groups” effect. (The “many groups” effect means that many groups might work on the same question. 19 out of 20 find nothing, and the 1 “lucky” group that finds something actually publishes). Meaning—this estimate is likely to be hugely optimistic.

All of these things make it rather clear that a large fraction of animal memory experiments, and perhaps most, are dubious. There is another reason why the great majority of these experiments tell us nothing about human memory: most such experiments involve rodents, and given the vast differences between humans and rodents, nothing reliable about human memory can be determined by studying rodent memory.

Sunday, November 18, 2018

Brain Dogmas Versus Case Histories That Refute Them

Our neuroscientists have the bad habit of frequently spouting dogmas that have not been established by observations. We have all heard these dogmas stated hundreds of times, such as when neuroscientists claim that memories are stored in brains, and that our minds are produced by our brains. There are actually many observations and facts that contradict such dogmas, such as the fact that many people report their minds and memories still working during a near death experience in which their brains shut down electrically (as the brain does soon after cardiac arrest).

One of the most dramatic types of observations conflicting with neuroscience dogmas is the fact that memory and intelligence are well-preserved after the operation called hemispherectomy. Hemispherectomy is the surgical removal of half of the brain. It is performed on children who suffer from severe and frequent epileptic seizures.

In a scientific paper “Discrepancy Between Cerebral Structure and Cognitive Functioning,” we are told that when half of their brains are removed in these operations, “most patients, even adults, do not seem to lose their long-term memory such as episodic (autobiographic) memories.” The paper tells us that Dandy, Bell and Karnosh “stated that their patient's memory seemed unimpaired after hemispherectomy,” the removal of half of their brains. We are also told that Vining and others “were surprised by the apparent retention of memory after the removal of the left or the right hemisphere of their patients.”

On page 59 of the book The Biological Mind, the author states the following:

A group of surgeons at Johns Hopkins Medical School performed fifty-eight hemispherectomy operations on children over a thirty-year period. "We were awed," they wrote later of their experiences, "by the apparent retention of memory after removal of half of the brain, either half, and by the retention of the child's personality and sense of humor." 

In the paper "Neurocognitive outcome after pediatric epilepsy surgery" by Elisabeth M. S. Sherman, we have some discussion of the effects on children of temporal lobectomy (removal of the temporal lobe of the brain) and hemispherectomy, surgically removing half of the brain to stop seizures. We are told this:

After temporal lobectomy, children show few changes in verbal or nonverbal intelligence....Cognitive levels in many children do not appear to be altered significantly by hemispherectomy. Several researchers have also noted increases in the intellectual functioning of some children following this procedure....Explanations for the lack of decline in intellectual function following hemispherectomy have not been well elucidated. 

Referring to a study by Gilliam, the paper states that of 21 children who had parts of their brains removed to treat epilepsy, including 10 who had surgery to remove part of the frontal lobe, "none of the patients with extra-temporal resections had reductions in IQ post-operatively," and that two of the children with frontal lobe resections had "an increase in IQ greater than 10 points following surgery." 

The paper here gives precise before and after IQ scores for more than 50 children who had half of their brains removed in a hemispherectomy operation in the United States.  For one set of 31 patients, the IQ went down by an average of only 5 points. For another set of 15 patients, the IQ went down less than 1 point. For another set of 7 patients the IQ went up by 6 points. 

The paper here (in Figure 4) describes IQ outcomes for 41 children who had half of their brains removed in hemispherectomy operations in Freiburg, Germany. For the vast majority of children, the IQ was about the same after the operation. The number of children who had increased IQs after the operation was greater than the number who had decreased IQs. 

Referring to these kinds of surgeries to remove huge amounts of brain tissue, the paper “Verbal memory after epilepsy surgery in childhood” states, “Group-wise, average normed scores on verbal memory tests were higher after epilepsy surgery than before, corroborating earlier reports.” 

Some try to explain these results as some kind of special ability of the child brain to recover. But there are similar results even for adult patients. The page here mentions 41 adult patients who had a hemispherectomy. It says, “Forty-one patients underwent additional formal IQ testing postsurgery, and the investigators observed overall stability or improvement in these patients,” and notes that “significant functional impairment has been rare.”

Of these cases of successful hemispherectomy, perhaps none is more astonishing than a case of a boy named Alex who did not start speaking until the left half of his brain was removed. A scientific paper describing the case says that Alex “failed to develop speech throughout early boyhood.” He could apparently say only one word (“mumma”) before his operation to cure epilepsy seizures. But then following a hemispherectomy (also called a hemidecortication) in which half of his brain was removed at age 8.5, “and withdrawal of anticonvulsants when he was more than 9 years old, Alex suddenly began to acquire speech.” We are told, “His most recent scores on tests of receptive and expressive language place him at an age equivalent of 8–10 years,” and that by age 10 he could “converse with copious and appropriate speech, involving some fairly long words.” Astonishingly, the boy who could not speak with a full brain could speak well after half of his brain was removed. The half of the brain removed was the left half – the very half that scientists tell us is the half that has more to do with language than the right half. 

We learn of quite a few such cases in the scientific paper "Long-Term Memory: Scaling of Information to Brain Size" by Donald R. Forsdyke of Queen's University in Canada. He quotes the physician John Lorber on an honors student with an IQ of 126:

Instead of the normal 4.5 centimetre thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimeter or so. The cranium is filled mainly with cerebrospinal fluid. … I can’t say whether the mathematics student has a brain weighing 50 grams or 150 grams, but it’s clear that it is nowhere near the normal 1.5 kilograms.

Forsdyke notes two similar cases in more recent years, one from France and another from Brazil.  

Cases like this make a mockery of scientists' claims to understand the human brain. When scientists discuss scientific knowledge relating to memory, they almost never discuss the most relevant thing they could discuss: the cases of high brain function after hemispherectomy operations in which half of the brain is removed. Instead the scientists cherry-pick information, and describe a few experiments and facts carefully selected to support their dogmas, such as the dogma that brains store memories and that brains make minds. They also fail to discuss the extremely relevant research of John Lorber, who documented many cases of high-functioning humans who had lost almost all of their brain due to hydrocephalus.


A scientist discussing memory will typically refer us to experiments involving rodents. Such experiments are almost always studies with low statistical power, because the experimenter failed to use at least 15 animals per study group, the minimum needed for a moderately reliable result with a low risk of a false alarm. There will typically be some graph showing some measurement of what is called freezing behavior, when a rodent stops moving. The experimenter will claim that this shows something was going on in regard to memory, although it probably does not show such a thing, because all measurements of a rodent's degree of freezing are subjective judgments in which an experimenter's bias might have influenced things. There will often be claims that a fear memory was regenerated by electrically zapping some part of the brain where the experimenter thought the memory was stored. Such claims have little force, because it is known that there are many parts of a rodent's brain that will cause a rodent to stop moving when electrically stimulated. And, of course, rodent experiments prove nothing about human memory, because humans are not rodents.

When a scientist discusses memory research, he will typically discuss the case of patient HM, a patient who was bad at forming new memories after damage to the tiny brain region called the hippocampus. Again and again, writers will speak as if this proves the hippocampus is crucial to memory. It certainly does not. The same very rare effect of having a problem in forming new memories cropped up (as reported here) in a man who underwent a dental operation (a root canal). The man had no brain damage, but after the root canal he was reportedly unable to form new memories. Such cases are baffling, and the fact that they can come up with or without brain damage tells us no clear tale about whether the hippocampus is crucial for memory. The hemispherectomy cases suggest that the hippocampus is not crucial for memory, for each patient who had a hemispherectomy lost one of their two hippocampi, and overall there was little permanent effect on the ability to form new memories.

A scientific paper tells us that “lesions of the rodent hippocampus do not produce reliable anterograde amnesia for context fear,” meaning rodents with a damaged hippocampus can still form new memories. The paper also tells us, “These data suggest that the hippocampus is normally involved in context conditioning but is not necessary for learning to occur.” So it seems that the main claim that neuroscientists cite to persuade us that they have some understanding of a neural basis for memory (the claim that the hippocampus is “crucial” for memory) is really a factoid that is not well established.

Postscript: The case of patient HM has been cited innumerable times by those eager to suggest that memories are brain-based. Such persons usually tell us that patient HM was someone unable to form any new memories. But a 14-year follow-up study of patient HM (whose memory problems started in 1953) actually tells us that HM was able to form some new memories. The study says this on page 217:

In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When asked whether President Kennedy was dead or alive, he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable.

The study also says that patient HM was able to learn a maze (though only very slowly), and was eventually able to walk the maze three times in a row without error.