Friday, December 31, 2021

NSF Grant Tool Query Suggests Engrams Are Not Really Science

An engram is a hypothetical spot in the brain where a memory trace is alleged to reside: an alteration in brain matter caused by the storage of a memory. While scientists have claimed that there are countless engrams in your head, the notion of an engram has no robust scientific evidence behind it, and no robust evidence for engrams has been found in any organism. Every study that has claimed to provide evidence for the existence of an engram has had problems that should cause us to doubt that good evidence for engrams was provided. 

In a previous post I pointed out the not-really-science status of engrams by running some queries on major preprint servers that store millions of scientific papers: the physics paper preprint server (which also includes quantitative biology papers), the biology preprint server, and the psychology preprint server. The queries (searching for papers that used the word "engram") showed only the faintest trace of scientific papers mentioning "engrams" in their titles. Only a handful of papers used that word in their title, and an examination of such papers (discussed in my post) showed they provided nothing like substantial evidence for the existence of any such thing as an engram. 

There is another way of testing whether this concept of engrams has any real observational support. We can use the grant search tool of the National Science Foundation. The National Science Foundation is a US institution that doles out billions of dollars each year in grants for scientific research.  You can use the NSF's grant query tool to find out how much research money is being allocated to research particular topics. 

You can perform the search by using the URL below:

https://www.nsf.gov/awardsearch/simpleSearchResult?queryText=engram
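For readers who want to repeat the search for other terms, below is a minimal Python sketch that builds the same query URL. The helper function name is my own, not part of any official NSF tool:

```python
from urllib.parse import urlencode

# Hypothetical helper: builds the NSF simple-search URL used in this post
# for an arbitrary query term.
def nsf_award_search_url(term: str) -> str:
    base = "https://www.nsf.gov/awardsearch/simpleSearchResult"
    return base + "?" + urlencode({"queryText": term})

print(nsf_award_search_url("engram"))
# -> https://www.nsf.gov/awardsearch/simpleSearchResult?queryText=engram
```

Substituting other terms (say, "synaptic plasticity") shows how sparsely "engram" is funded by comparison.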

The results are shown below. We get only 3 matches. The last match is a climate paper having nothing to do with memory. So our search has produced only two National Science Foundation grants relating to the topic of engrams.

Engram query

The second project ("Functional Dissociation Within the Hippocampal Formation: Learning and Memory") was completed in 1992. Clicking on the link to the project, we see that $163,000 was spent, but no scientific papers are listed as resulting from the project. 

The first grant is a grant of $996,778.00 (nearly one million dollars) that was given to a project entitled "Dendritic spine mechano-biology and the process of memory formation." The project started in 2017 and has a listed end date of July, 2022.  The project description gives us a statement of speculative dogma regarding memory storage.  There are actually very good reasons why the speculations cannot be correct. Below is the statement from the project description:

"The initiation of learning begins with changes at neuronal synapses that can strengthen (or weaken) the response of the synapse. This process is termed synaptic plasticity. Stimuli that produce learning lead to structural changes of the post-synaptic dendritic spine. The initial events of memory and learning include a temporary rise in calcium concentrations and activation of a protein called calmodulin. The next step is activation of calmodulin-dependent enzyme, kinase II (CaMKII). At the same time, structural rearrangements occur in the actin cytoskeleton leading to an enlargement of the spine compartment. How these initial events lead to remodeling of the actin cytoskeleton is largely unknown. This project focuses on the events that lead to the changes in actin cytoskeleton. The research also addresses the question of how these structural changes in the actin cytoskeleton are used to maintain memory."

To see why the main parts of the statement are not well-founded in observations, let us consider dendritic spines. A dendritic spine is a tiny protrusion from one of the dendrites of a neuron. The diagram below shows a neuron in the top half of the diagram. Some dendritic spines are shown in the bottom half of the visual. The bottom half of the visual is a closeup of the red-circled part in the top of the diagram. 


An individual neuron in the brain may have about a thousand such dendritic spines. The total number of dendritic spines in the brain has been estimated at 100 trillion, which is about a thousand times greater than the number of neurons in the brain.  The total number of synapses in the brain has also been estimated at 100 trillion. A large fraction of synapses are connected to dendritic spines. 


Now, given such a high number of dendritic spines and synapses, we have the interesting situation that there is no possibility of correlating the learning of something with a strengthening of synapses or a strengthening, enlarging, or growth of dendritic spines. Even if we are testing only a mouse, we still have an animal with trillions of dendritic spines and trillions of synapses. Scientists are absolutely unable to measure the size, strength or growth of all of those dendritic spines and synapses; the technology for doing that simply does not exist. What scientists can do is inspect a very small number of dendritic spines, taking snapshots of their physical state. But no such inspection would ever allow you to conclude that one or more dendritic spines had increased in size or grown or strengthened because some learning had occurred. Since dendritic spines slowly increase and decrease in size in an apparently random fashion, there is no way to tell whether the increase or decrease of a dendritic spine (or a small number of such spines) is being caused by learning or by the formation of a memory. 
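Some simple arithmetic underscores the sampling problem. The number of spines tracked in a single imaging study below is my own generous assumption, not a figure from any particular paper:

```python
# Back-of-envelope numbers; the total comes from the estimate quoted above,
# and the imaged-spine count is an assumed, generous illustrative figure.
total_spines = 100e12      # ~100 trillion dendritic spines in a brain
imaged_spines = 10_000     # an optimistic guess at spines tracked in one study

fraction = imaged_spines / total_spines
print(f"fraction of spines observable: {fraction:.0e}")
```

Even on these generous assumptions, an imaging study samples about one ten-billionth of the spine population.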

Therefore the statements below (quoted above) cannot be well-founded:

"The initiation of learning begins with changes at neuronal synapses that can strengthen (or weaken) the response of the synapse. This process is termed synaptic plasticity. Stimuli that produce learning lead to structural changes of the post-synaptic dendritic spine."

In fact, we know of the strongest reason why the hypothesis underlying such a claim cannot be true. The reason is that human memories are often extremely stable and long lasting, while dendritic spines and synapses are unstable, fluctuating things that have typical lifetimes of a few months or weeks.  Read here to find some papers supporting such a claim.  I can quote some scientists (Emilio Bizzi and Robert Ajemian) on this topic:

"If we believe that memories are made of patterns of synaptic connections sculpted by experience, and if we know, behaviorally, that motor memories last a lifetime, then how can we explain the fact that individual synaptic spines are constantly turning over and that aggregate synaptic strengths are constantly fluctuating? How can the memories outlast their putative constitutive components?"

The word "outlast" is a huge understatement here, for the fact is that human memories such as 50-year-old memories last very many times longer than the maximum lifetime of dendritic spines and synapses, and such memories last 1000 times longer than the protein molecules that make up such spines and synapses (which have average lifetimes of only a few weeks or less). 
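The arithmetic behind that thousand-fold ratio is simple. The protein lifetime below is an assumed round value within the "few weeks or less" range quoted above:

```python
# Rough lifetime comparison; both figures are the post's own estimates.
memory_lifetime_weeks = 50 * 52    # a 50-year-old memory, expressed in weeks
protein_lifetime_weeks = 2.6       # assumed average synapse-protein lifetime

ratio = memory_lifetime_weeks / protein_lifetime_weeks
print(round(ratio))  # on these assumptions, memories outlast the proteins ~1000-fold
```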

But enough of this long disputation of the claims made in the project description of the project entitled "Dendritic spine mechano-biology and the process of memory formation." Now let's look at what the million-dollar project has published so far in the way of results. We can see that by going to this page and looking at the section entitled "PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH." The last three of these papers do not mention memory or engrams, so we may assume that they did nothing to substantiate claims about neural storage places of memory (engrams). The only paper mentioning memory or engrams in its title is a paper entitled "Exploring the F-actin/CPEB3 interaction and its possible role in the molecular mechanism of long-term memory." The paper can be read in full here.

The paper does not do anything to substantiate claims that memories are stored in engrams in the brain.  The paper merely presents a speculative chemistry model and some speculative computer simulations.  No experiments with animals have been done, and no research on human brains has been done. Apparently, there were no lab experiments of any type done, with all of the "experimentation" going on inside computers. The computer simulations do not involve the biochemical storage or preservation of any learned information.  The quotes below help show the wildly speculative nature of the paper (I have put in boldface words indicating that speculation is occurring). 

"Here we study the interaction between actin and CPEB3 and propose a molecular model for the complex structure of CPEB3 bound to an actin filament... Our model of the CPEB3/F-actin interaction suggests that F-actin potentially triggers the aggregation-prone structural transition of a short CPEB3 sequence....The CPEB/F-actin interaction could provide the mechanical force necessary to induce a structural transition of CPEB oligomers from a coiled-coil form into a beta-sheet–containing amyloid-like fiber...This beta-hairpin acts as a catalyst for forming intramolecular beta-sheets and could thereby help trigger the aggregation of CPEB3....These beta-sheets could, in turn, participate in further intermolecular interactions with free CPEB3 monomers, triggering a cascade of aggregation....Several possible mechanisms by which SUMOylation could regulate the CPEB3/F-actin interaction are discussed in SI Appendix....
We propose that SUMOylation of CPEB3 in its basal state might repress the CPEB3/F-actin interaction....Furthermore, the beta-hairpin form of the zipper suggests that it might be able to trigger extensive beta-sheet formation in the N-terminal prion domain, PRD....The beta hairpin form of zipper sequence is a potential core for the formation of intramolecular beta sheets... The maintenance of the actin cytoskeleton and synaptic strength then might involve the competition between CPEB3 and cofilin or other ABPs....We also propose that the CPEB3/F-actin interaction might be regulated by the SUMOylation of CPEB3, based on bioinformatic searches for potential SUMOylation sites as well as SUMO interacting motifs in CPEB3....We therefore propose that SUMOylation of CPEB3 is a potential inhibitor for the CPEB3/F-actin interaction."

The wildly speculative nature of the paper is shown by the boldface words above, and by the sentence at the end of the paper's long "Results" section: "Further experimental and theoretical work is required to determine which, if any, of these mechanisms is operating in neurons." Note well the phrase "which, if any" here. This is a confession that the authors are not sure a single one of the imagined effects actually occurs in a brain. 

In this case the US government paid a million dollars for essentially a big bucket of "mights" and "coulds," and the authors do not seem confident that any of these speculative effects actually occur in the brain. Whatever is going on here, it doesn't sound like science with a capital S (which I define as facts established by observations or experiments). Even if all of the wildly speculative "mights" and "coulds" were true, it still would not do a thing to show that memories lasting fifty years can be stored in dendritic spines and synapses that do not last for years, and are made up of proteins that have average lifetimes of only a few weeks. The idea that changes in synapse strength can store complex learned information has never made any sense. Information is physically stored not by mere strengthening but by using some type of coding system to write information with tokens of representation. Never does a mere strengthening store information. The idea that you store memories by synapse strengthening makes no more sense than the idea that you learn school lessons by strengthening your arm muscles. If memories were stored as differences in synapse strengths, you could never recall such memories, because the brain lacks any such thing as a synapse strength reader. 

Wednesday, December 22, 2021

A New Paper Reminds Us Neuroscientists Can't Get Their Story Straight About Memory Storage

There is a new scientific paper with the inappropriate title "Where is Memory Information Stored in the Brain?" This is not the question we should be asking. The question we should be asking is: "Is memory information stored in the brain?"  Although it was probably not the intention of the authors (James Tee and Desmond P. Taylor), what we get in the paper is a portrait of how neuroscientists are floundering around on this topic, like some poor shark that is left struggling in the sand after going after its prey too aggressively. 

Tee and Taylor claim this on page 5: "Based on his discovery of the synapse as the physiological basis of memory storage, Kandel was awarded the year 2000 Nobel Prize in Physiology or Medicine (Nobel Prize, 2000)." This is a misstatement about a very important topic. The Nobel Prize listing for Kandel does not mention memory. The official page listing the year 2000 Nobel Prize for physiology states only the following: "The Nobel Prize in Physiology or Medicine 2000 was awarded jointly to Arvid Carlsson, Paul Greengard and Eric R. Kandel 'for their discoveries concerning signal transduction in the nervous system.' " The Nobel committee did not make any claim that synapses had been discovered as the basis of memory. 

Before making this claim about the Nobel Prize, Tee and Taylor  state something that makes no sense. They state, "The groundbreaking work on how memory is (believed to be) stored in the human brain was performed by the research laboratory of Eric R. Kandel on the sea slug Aplysia (Kupfermann et al., 1970; Pinsker et al., 1970)." How could research on a tiny sea slug tell us how human beings store memories?  The paper in question can be read here. The paper fails to mention a testing of more than a single animal, thereby strongly violating rules of robust experimental research on animals (under which an effect should not be claimed unless at least 15 subjects were tested).  We have no reliable evidence about memory storage from this paper. If the paper somehow led to its authors getting a Nobel Prize, that may have been a careless accolade.  The Nobel Prize committee is pretty good about awarding prizes only to the well-deserved, but it may occasionally fall under the gravitational influence of scientists boasting about some "breakthrough" that was not really any such thing. 

Equally undeserving of a Nobel Prize was the next research discussed by our new paper on memory storage: research claiming a discovery of "place cells" in the hippocampus. John O'Keefe published a paper in 1976 claiming to detect "place units" in the hippocampus of rats. The paper also used the term "place cells." The claim was that certain cells were more active when a rat was in a certain spatial position. The paper did not meet standards of good experimental science. For one thing, the study group sizes it used were far too small for robust evidence to have been produced. One study group consisted of only five rats, and another of only four rats; 15 animals per study group is the minimum for a moderately convincing result. For another thing, no blinding protocol was used. And the study was not a pre-registered study, but was apparently one of those studies in which an analyst is free to fish for whatever effect he may feel like finding after the data has been collected. 
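A rough power calculation shows why such tiny groups are unconvincing. The sketch below uses a standard normal-approximation formula for a two-sample comparison, and it assumes (hypothetically) a large true effect of one standard deviation; even then, four animals per group give well under a 50 percent chance of detecting the effect:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(n_per_group: int, effect_size: float = 1.0) -> float:
    """Normal-approximation power of a two-sample test at alpha = 0.05
    (two-sided), assuming a standardized effect of `effect_size`.
    A textbook simplification, not the exact t-distribution power."""
    return normal_cdf(effect_size * sqrt(n_per_group / 2.0) - 1.96)

print(f"n=4:  power ~ {approx_power(4):.2f}")
print(f"n=15: power ~ {approx_power(15):.2f}")
```

On these assumptions, power rises from roughly 0.3 with four animals per group to roughly 0.8 with fifteen, which is one standard rationale for larger minimum group sizes.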

The visuals in the study compare wavy signal lines collected while a rat was in different areas of an enclosed unit. The wavy signal lines look pretty much the same no matter which area the rats were in. But O'Keefe claims to have found differences. No one should be persuaded that the paper shows robust evidence for an important real effect. We should suspect that the analyst looked for stretches of wavy lines that appeared different when the rat was in different areas, and chose the stretches that best supported his claim that some cells were more active when the rats were in different areas. Similar Questionable Research Practices (with similarly too-small study groups, such as four rats) can be seen in O'Keefe's 1978 paper here.

Although O'Keefe's 1976 paper and 1978 paper were not at all a robust demonstration of any important effect, the myth that "place cells" had been discovered started to spread among neuroscience professors, and O'Keefe even got a Nobel Prize. Awarding him a prize for his unconvincing work on supposed "place cells" seems like another flub of the normally trusty Nobel Prize committee. Even if certain cells are more active when rats are in certain positions (something we would always expect to observe from chance variations), that does nothing to show that there is anything like a map of spatial locations in the brains of rats. 

On page 7 of the new paper on memory storage, we have a discussion of equally unconvincing results:

"LeDoux found that this conditioned fear resulted in LTP (strengthening of synapses) in the auditory neurons of the amygdala, to which he concluded that the LTP constituted memory of the conditioned fear. That is, memory was stored by way of strengthening the synapses, as hypothesized by Hebb."

You may understand why this is nothing like convincing evidence when you realize that synapses are constantly undergoing random changes. At any moment billions of synapses may be weakening, and billions of other synapses may be strengthening.  So finding some strengthening of synapses is no evidence of memory formation. It is merely finding what goes on constantly in the brain, with weakening of synapses occurring just as often as strengthening. The new paper on memory storage confesses this when it says on page 8 that: "synapses in the brain are constantly changing, in part due to the inevitable existence of noise." 

On pages 8-9 of the new paper, Tee and Taylor say that scientists had hopes that there would be breakthroughs in handling memory problems by studying synapses, but that "the long-awaited breakthroughs have yet to be found, raising some doubts against Hebb’s synaptic [memory storage] hypothesis and the subsequent associated experimental findings." Tee and Taylor give us on page 9 a quotation from two other scientists, one that gives a great reason for rejecting theories of synaptic memory storage:

"If we believe that memories are made of patterns of synaptic connections sculpted by experience, and if we know, behaviorally, that motor memories last a lifetime, then how can we explain the fact that individual synaptic spines are constantly turning over and that aggregate synaptic strengths are constantly fluctuating? How can the memories outlast their putative constitutive components?"

Tee and Taylor  then tell us that this problem does not just involve motor memories:

"They further pointed out that this mystery existed beyond motor neuroscience, extending to all of systems neuroscience given that many studies have found such constant turn over of synapses regardless of the cortical region. In order words, synapses are constantly changing throughout the entire brain: 'How is the permanence of memory constructed from the evanescence of synaptic spines?' (Bizzi & Ajemian, 2015, p. 92). This is perhaps the biggest challenge against the notion of synapse as the physical basis of memory."

Tee and Taylor then discuss various experiments that defy the synaptic theory of memory storage.  Most of the studies are guilty of the same Questionable Research Practices that are so extremely common in neuroscience research these days, so I need not discuss them.  We hear on page 14 about various scientists postulating theories that are alternatives to the synaptic theory of memory storage:

"The logical question to pose at this point is: if memory information is not stored in the synapse, then where is it? Glanzman suggested that memory might be stored in the nucleus of the neurons (Chen et al., 2014). On the other hand, Tonegawa proposed that memory might be stored in the connectivity pathways (circuit connections) of a network of neurons (Ryan et al., 2015). Hesslow emphasized that memory is highly unlikely to be a network property (in disagreement with Tonegawa), and further posited that the memory mechanism is intrinsic to the neuron (in agreement with Glanzman) (Johansson et al., 2014)."

You get the idea? These guys are in disarray, kind of all over the map, waffling around between different cheesy theories of memory storage. All of the ideas mentioned above have their own fatal difficulties, reasons why they cannot be true.  In particular, there is no place in a neuron where memory could be written, with the exception of DNA and RNA; and there is zero evidence that learned knowledge such as episodic memories and school lessons are stored in DNA or RNA (capable of storing only low-level chemical information).  Human DNA has been extremely well-studied by long well-funded multi-year research projects such as the Human Genome Project completed in 2003 and the ENCODE project, and no one has found a bit of evidence of anything in DNA that stores episodic memory or any information learned in school.

Tee and Taylor then give us more examples of experiments that they think may support the idea of memories stored in the bodies of neurons (rather than synapses). But they fail to actually support such an idea because the studies follow Questionable Research Practices.  For example, they cite the study here, which fails to qualify as a robust well-designed study because it uses study group sizes as small as 9, 11 and 13. To give another example, Tee and Taylor cite the Glanzman study here, which  fails to qualify as a robust well-designed study because it uses study group sizes as small as 7. Alas, the use of insufficient sample sizes is the rule rather than the exception in today's cognitive neuroscience, and Tee and Taylor seem to ignore this problem.  

The heavily hyped Glanzman study (guilty of Questionable Research Practices) claimed a memory transfer between Aplysia animals achieved by RNA injections. Such a study can have little relevance to permanent memory storage, because RNA molecules have very short lifetimes of less than an hour. 

Finally in Tee and Taylor's paper, we have a Conclusions section, which begins with this confession which should cause us to doubt all claims of neural memory storage: "After more than 70 years of research efforts by cognitive psychologists and neuroscientists, the question of where memory information is stored in the brain remains unresolved."  This is followed by a statement that is at least true in the first part: "Although the long-held synaptic hypothesis remains as the de facto and most widely accepted dogma, there is growing evidence in support of the cell-intrinsic hypothesis."  It is correct to call the synaptic memory hypothesis a dogma (as I have done repeatedly on this blog). But Tee and Taylor commit an error in claiming "there is growing evidence in support of the cell-intrinsic hypothesis" (the hypothesis that memories are stored in the bodies of neurons rather than synapses that are part of connections between neurons).  There is no robust evidence in support of such a hypothesis, and the papers Tee and Taylor have cited as supporting such a hypothesis are unconvincing because of their Questionable Research Practices such as too-small sample sizes. 

On their last two pages the authors end up in shoulder-shrugging mode, saying, "while the cell might be storing the memory information, the synapse might be required for the initial formation and the subsequent retrieval of the memory." We are left with the impression of scientists in disarray, without any clear idea of what they are talking about, rather like some theologian speculating about exactly where the angels live in heaven, bouncing around from one idea to another. In their last paragraph Tee and Taylor speculate about memories being inherited from one generation to another by DNA, which is obviously the wildest speculation. 

Our takeaway from Tee and Taylor's recent paper should be this: scientists are in baffled disarray on the topic of memory. They have no well-established theory of memory storage in the brain, and are waffling around between different speculations that contradict each other.  We are left with strong reasons for suspecting that scientists are getting nowhere trying to establish a theory of memory storage in the brain.  This is pretty much what we should expect if memories are not stored in brains, and cannot be stored in brains.  Always be very suspicious when someone says something along the lines of, "What scientists have been teaching for decades is not true, but they have a new theory that has finally got it right." More likely the new theory is as false as the old theory. 

If anyone is tempted to put credence in this "cell-intrinsic hypothesis" of memory storage, he should remind himself of the physical limitations of DNA.  DNA uses what is called the genetic code. The genetic code is shown below. The A, C, T and G letters at the center stand for the four types of nucleotide base pairs used by DNA:  adenine (A), cytosine (C), guanine (G), and thymine (T). Different triple combinations of these base pairs stand for different amino acids (the twenty types of chemicals shown on the outer ring of the visual below). 

So DNA is profoundly limited in what it can store. In the human body DNA can only store low-level chemical information. We know of no way in which DNA in a human body could store any such things as information learned in school or episodic memories.  Such things cannot be stored using the genetic code used by DNA.  No one has ever found any evidence that strings of characters (such as memorized text) are stored in human DNA, nor has anyone found any evidence that visual information is stored in human DNA. Moreover, if we had to write memories to DNA or read memories from DNA, it would be all-the-more impossible to explain the phenomena of instant memory formation and instant memory retrieval. 
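A toy translation sketch (using a handful of entries from the standard codon table) makes the point concrete: every output of the genetic code is an amino acid, never a letter of text, a number, or an image:

```python
# A few entries of the standard genetic code, enough to translate a toy gene.
# Triplets of bases map to amino acids; that is the only known output.
CODON_TABLE = {
    "ATG": "M",  # methionine (start)
    "GCT": "A",  # alanine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop
}

def translate(dna: str) -> str:
    """Translate a DNA string, three bases at a time, until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGGCTTGGTAA"))  # -> MAW
```

However long the DNA string, the result is always just a chain of amino acids, which is why the genetic code is a scheme for building proteins, not for recording school lessons.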

Some have suggested that DNA methylation marks might be some mechanism for memory storage. This idea is very unbelievable. DNA methylation is the appearance of a chemical mark on different positions of DNA. The chemical mark is almost always the same methyl (CH3) group added to the cytosine base. These chemical marks serve as transcription suppressors which prevent particular genes from being expressed. Conceptually we may think of a DNA methylation mark as an "off switch" that turns off particular genes. 

The idea that the collection of these chemical "off switches" can serve as a system for storing memories is unbelievable. DNA is slowly read by cells in a rather sluggish process called transcription, but there is no physical mechanism in the body for specifically reading only DNA methylation marks. If there were anything in the body for reading only DNA methylation marks, it would be so slow that it could never account for instant memory recall.  We know the purpose that DNA methylation marks serve in the body: the purpose of switching off the expression of particular genes. Anyone claiming that such marks also store human memories is rather like some person claiming that his laundry detergent is a secret system for storing very complex information. 

A metric relevant to such claims is the maximum speed at which DNA can be read. The reading of DNA proceeds at a maximum rate of about 60 nucleotide pairs per second, corresponding to about 20 amino acids per second. This is the fastest rate, with preparatory work being much slower. DNA methylation occurs on only one of the four bases, meaning that no more than about 15 DNA methylation marks could be read in a second (after the slower preparatory work is done).  
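The arithmetic here can be checked in a few lines (the input figures are themselves the rough estimates quoted above):

```python
# Rates taken from the text; both are approximations.
amino_acids_per_second = 20
nucleotides_per_second = amino_acids_per_second * 3  # 3 bases per codon -> ~60 nt/s

# Methylation marks sit only on cytosines, one of the four bases, so at most
# about a quarter of the bases scanned could carry a readable mark.
max_marks_per_second = nucleotides_per_second / 4

print(nucleotides_per_second, max_marks_per_second)  # -> 60 15.0
```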

Let us imagine (very implausibly) that DNA methylation marks serve as a kind of binary code for storing information.  Let us also imagine (very implausibly) that there is a system by which letters can be stored in the body, by means of something like the ASCII code, and by means of DNA methylation.  Such a system would have storage requirements something like this:

Letter      ASCII number equivalent      Binary equivalent
A           12                           1100
B           13                           1101
C           14                           1110
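A minimal sketch of this hypothetical scheme follows. To be clear, the values are the post's illustrative ones, not real ASCII (which assigns 'A' the value 65, needing 7 bits per letter and making the imagined reading even slower):

```python
# The post's illustrative letter code: letters get small numbers starting
# at 12, written as 4-bit binary. Note that 4 bits run out after 'D' (15),
# so a full alphabet would need at least 5 bits per letter.
def encode_letter(ch: str) -> str:
    value = ord(ch.upper()) - ord("A") + 12
    return format(value, "04b")

print(encode_letter("A"), encode_letter("B"), encode_letter("C"))
# -> 1100 1101 1110
```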


Under such a storage system, once the exact spot had been found for reading the right information (which would take a very long time, given that the brain has no indexing system and no position coordinate system), and after some chemical preparatory work had been done to enable reading from DNA, information could be read at a rate of no more than about four characters per second. But humans can recall things much faster than that. When humans talk fast, they are speaking at a rate of more than two words per second (more than 10 characters per second). So if you ask me to describe how the American Civil War began and ended, I can spit out remembered information at a rate several times faster than we can account for by a reading of DNA methylation marks, even if we completely ignore the time it would take to find the right little spot in the brain that stored exactly the right information to be recalled. 

A realistic accounting of the time needed for memory recall of information stored in binary form by DNA methylation would have to add up all of these things:
  • The time needed for finding the exact spot in the brain where the correct recalled information was stored (requiring many minutes or hours or days, given no indexing and no coordinate system in the brain);
  • The time needed for chemical preparatory work that would have to be done before DNA can be read (such as the time needed to get RNA molecules that can do the reading);
  • Reading DNA methylation marks (encoding binary numbers) at a maximum rate of no more than four characters per second (and usually a much slower rate because of a sparse scattering of such marks);
  • Translating such binary numbers into their decimal equivalent;
  • Translating such decimal numbers into character equivalents;
  • Translating such retrieved letters into speech.
All of this would be so slow that if memories were stored as DNA methylation marks, you would never be able to speak correct recalled information at a rate a tenth as fast as two words per second, as humans can do. Similarly, you would never be able to form new memories instantly (as humans are constantly doing) if memory storage required writing binary information as DNA methylation marks, which would be a very slow process.  Humans can form new memories at the same rate at which they can recall memories. Suppose you are leaving to go food shopping and someone in your house says, "Please buy me a loaf of whole wheat bread and some orange juice." You may form a new memory of those exact words, at a rate of two words per second.  Storing such information as DNA methylation marks would be much slower than such a rate. 
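Even counting only the reading stage (the third item above) and ignoring everything else, the shortfall is easy to compute, as a quick calculation on the grocery-list example shows. All rates are the post's own estimates:

```python
# Lower-bound timing for the reading stage alone, using the post's figures.
marks_per_second = 15    # max readable methylation marks per second
bits_per_char = 4        # the illustrative 4-bit letter code
chars_per_second_read = marks_per_second / bits_per_char  # 3.75 chars/s

sentence = "Please buy me a loaf of whole wheat bread and some orange juice"
chars = len(sentence.replace(" ", ""))
read_seconds = chars / chars_per_second_read
speak_seconds = len(sentence.split()) / 2.0  # fast speech: ~2 words per second

print(f"reading: {read_seconds:.1f} s, speaking: {speak_seconds:.1f} s")
```

On these assumptions, merely reading the marks takes roughly twice as long as saying the sentence, before adding any of the other steps in the list.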

I may note that while scientists can read DNA and DNA methylation marks from neural tissue, no one has ever found the slightest speck of human learned information stored in DNA or DNA methylation marks, synapse strengths, or any other type of representation in the brain; nor has anyone found any evidence of any coding scheme by which letters or numbers or visual images are stored in human DNA or DNA methylation marks.  When brain surgeons remove half of a brain (to treat very severe seizures) or remove portions of a brain (to treat severe epilepsy or cancer), they discard the cut-out brain tissue, and do not try to retrieve memory information stored in it.  They know that attempting such a thing would be utterly futile. 

It has been claimed by the few proponents of memory stored in the DNA methylation marks that such marks are a stable medium for writing information. But search for "DNA methylation turnover" and you will find contrary claims, such as a paper entitled "Rapid turnover of DNA methylation in human cells." 

Wednesday, December 15, 2021

Scientific American's "New Clues" on Mind Origins Sound Like a Handful of Moonbeams

Scientific American recently published an article by two biology professors, an article on the origin of mind.  We have a clickbait title of "New Clues About the Origin of Biological Intelligence," followed by a misleading subtitle of "A common solution is emerging in two different fields: developmental biology and neuroscience."  Then, contrary to their subtitle, the authors (Rafael Yuste and Michael Levin) state, "While scientists are still working out the details of how the eye evolved, we are also still stuck on the question of how intelligence emerges in biology."  So now biologists are saying they are still stuck on both of these things? 

Funny, that's a claim that contradicts what biologists have been telling us for many decades. For many decades, biologists have made the bogus boast that the mere "natural selection" explanation of Charles Darwin was sufficient to explain the appearance of vision, a claim that has never made any sense,  because so-called natural selection is a mere theory of accumulation that does not explain any cases of vast organization such as we see in vision systems and their incredibly intricate biochemistry.  Vastly organized things (such as bridges and cells and TV sets and protein complexes) are not mere accumulations (examples of which are snowdrifts, leaf piles and drain sludge buildup). And biologists have also for many decades been making the equally bogus boast that they understand the origin of human minds, based on the claim that it was just an evolution of bigger or better brains (a claim that is false for reasons explained in the posts on this blog). 

It would be great if our Scientific American article was a frank explanation of why scientists are stuck on such things.  But instead the article is an example of a staple of science literature: an article that not-very-honestly kind of claims "we're getting there" on some explanatory problem which scientists are actually making little or no  progress on. To read about the modus operandi of many articles of this type, read my post " 'We're Getting There' Baloney Recurs in Science Literature." 

We quickly get an inkling of a strategy that will be used by the authors.  It is a strategy similar to the witless or deceptive strategy Charles Darwin used in The Descent of Man when he claimed this near the beginning of Chapter 3: “My object in this chapter is to show that there is no fundamental difference between man and the higher mammals in their mental faculties." The statement was a huge falsehood, and it is easy to understand why Darwin made it. The more some biologist tries to shrink and minimize the human mind,  like someone saying the works of Shakespeare are "just some ink marks on paper," the more likely someone may be to believe that such a biologist can explain the mind's origin. The more a biologist  dehumanizes humans, making them sound like animals, the more likely someone may be to think that such a biologist can explain the origin of humans. 

Seemingly following just such a strategy, the authors (Yuste and Levin) try to fool us into thinking there is nothing very special about intelligence. They write this:

"In fact, intelligence—a purposeful response to available information, often anticipating the future—is not restricted to the minds of some privileged species. It is distributed throughout biology, at many different spatial and temporal scales. There are not just intelligent people, mammals, birds and cephalopods. Intelligent, purposeful problem-solving behavior can be found in parts of all living things: single cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks."

Notice the gigantically shrunken and downgraded definition of intelligence, as a mere "purposeful response to available information."  Under such a definition, a smoke detector is intelligent, and bicycle brakes are intelligent (because they respond to information about foot pressure or hand pressure); and an old round 1960's Honeywell thermostat is also intelligent, because if I set the thermostat to 70, and it got much colder outside, the thermostat turned up the heat to keep the temperature at 70.  But smoke detectors and bicycle brakes and old Honeywell thermostats are not intelligent, and neither are the much newer computerized thermostats that are marketed as "intelligent thermostats."  

The Merriam-Webster dictionary gives us two definitions of intelligence: 

"(1) the ability to learn or understand or to deal with new or trying situations : REASON.

(2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)."

Very obviously, neither of these definitions applies to some of the things that our Scientific American biologists have claimed are intelligent: "single cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks."  Such things may be driven or may have been designed by some mysterious intelligent power greater than the human mind, but they are not intelligent themselves.  Protein molecules, ribosomes and individual cells do not have minds or intelligence.  Rather than referring to such things as examples of "biological intelligence," Yuste and Levin should have merely called such things examples of "biological responsiveness." 

Our authors then give us a paragraph that is misleading and poorly reasoned. We read this:

"A common solution is emerging in two different fields: developmental biology and neuroscience. The argument proceeds in three steps. The first rests on one of natural selection’s first and best design ideas: modularity. Modules are self-contained functional units like apartments in a building. Modules implement local goals that are, to some degree, self-maintaining and self-controlled. Modules have a basal problem-solving intelligence, and their relative independence from the rest of the system enables them to achieve their goals despite changing conditions. In our building example, a family living in an apartment could carry on their normal life and pursue their goals, sending the children to school for example, regardless of what is happening in the other apartments. In the body, for example, organs such as the liver operate with a specific low-level function, such as controlling nutrients in the blood, in relative independence with respect to what is happening, say, in the brain."

The claim that "modularity" was one of "natural selection's first and best design ideas" is false. A module is defined by the Cambridge Dictionary as "one of a set of separate parts that, when combined, form a complete whole."  In computing and spacecraft and education, each module is itself a complex thing that can exist independently, and such complex modules can be combined to form units of greater complexity. A classic example of modularity is the Lunar Excursion Module (LEM) of the Apollo spacecraft, which detached from the main spacecraft to land on the moon, returning later to reunite with the main spacecraft.  Nowhere did Darwin discuss modules.  Darwin's idea was that complex things arise by an accumulation of countless tiny changes.  Such an idea is very different from thinking that very complex organisms arise from a combination of modules.  And complex organisms do not arise from a combination of independent modules. The organs of the human body are not at all independent of each other. Every organ in the body depends on the correct function of several other organs in the body, besides having additional bodily dependencies. 

The claim the authors make of a liver existing "in relative independence" is untrue.  A liver would shut down within a single day if either the heart or the lungs or the brain were removed (brains are necessary for the autonomic function of the heart and the lungs).  The liver would not last more than a few weeks if the kidneys or the stomach were removed.  Instead of being independent modules, the cells and organs of the body are gigantically interdependent. The existence of such massively interdependent objects in bodies (with so many cross-dependencies)  makes it a million times harder for biologists to credibly explain biological origins, and makes a mockery of their boastful claims to understand such origins. So it is no surprise that biologists frequently resort to misleading statements denying or downplaying such massive interdependence, statements like the statement I quoted in italics above.  

The diagram below gives us a hint of the cross-dependencies in biological systems, but fails to adequately represent them. A better diagram would be one in which there were fifty or more arrows indicating internal dependencies. 

[diagram: complex biological systems]

Our authors have not even got apartment buildings right.  I live in an apartment that is one of many in my building. My apartment is certainly not an independent module. It is dependent on the overall plumbing system and gas system and heating system and electrical system shared by the entire building. 

The authors (Yuste and Levin) then discuss hierarchical organization. Hierarchical organization is certainly a very big aspect of physical human bodies. Subatomic particles are organized into atoms, which are organized into amino acids, which are organized into protein molecules, which are organized into protein complexes, which are organized into organelles, which are organized into cells, which are organized into tissues, which are organized into organs, which are organized into organ systems, which are organized into organisms.  This is all the greatest embarrassment for today's biologists, who lack both a theory of the origin of hierarchical organization, and any theory at all of biological organization (Darwinism being a mere theory of accumulation, not a theory of organization).  

Contrary to what our Scientific American authors insinuate, hierarchical organization is not a good description of minds. Our minds have no organization anything like the hierarchical organization of our bodies. So our authors err by suggesting  hierarchical organization as some kind of "new clue" in understanding the origin of minds.  Here is their vaporous reasoning with no real substance behind it:

"In biology, different organs could belong to the same body of an organism, whose goal would be to preserve itself and reproduce, and different organisms could belong to a community, like a beehive, whose goal would be to maintain a stable environment for its members. Similarly, the local metabolic and signaling goals of the cells integrate toward a morphogenetic outcome of building and repairing complex organs. Thus, increasingly sophisticated intelligence emerges from hierarchies of modules."

This is nothing remotely resembling a credible explanation for the origin of human minds that can do math and philosophy and abstract reasoning. The last sentence of the paragraph uses "thus" in a very inappropriate way, for none of the preceding talk explains how humans could get minds. Our minds are not "hierarchies of modules."  Instead of being independent modules, different aspects of our minds are very much dependent on other aspects of our minds.  Complex thought and language and memory and understanding are not independent modules. With very few exceptions, you cannot engage in complex thought without language and memory; and every time you use language you are relying on memory and understanding (your recall of the meaning of words); and you can't understand much of anything without using your memory. 

Next our Scientific American authors use the term "pattern completion" in a strange and not very helpful way.  Very oddly, they state this:

"A third step in our argument addresses this problem: each module has a few key elements that serve as control knobs or trigger points that activate the module. This is known as pattern completion, where the activation of a part of the system turns on the entire system."

Whatever the writers are talking about, it does nothing to explain minds. Yuste and Levin end by trying to cite some research dealing with this "pattern completion" effect they referred to. They cite only a paper that seems to be guilty of the same Questionable Research Practices that most neuroscience experiments are guilty of these days.  It is a mouse experiment that used too-small study group sizes, such as study groups of 6 mice and 7 mice and 9 mice. The authors of the paper state, "We did not use a statistical power analysis to determine the number of animals used in each experiment beforehand." Such a confession is usually made when experimenters have used way-too-small sample sizes, using far fewer than the 15 subjects per study group recommended for robust results. The authors tell us "experimental data were collected not blinded to experimental groups," and make no claim that any blinding protocol was used.  Because of such procedural defects, the paper provides no robust evidence for what Yuste and Levin claim, that "fascinating pattern-completion neurons activated small modules of cells that encoded visual perceptions, which were interpreted by the mouse as real objects."  The only other paper cited by Yuste and Levin is a self-citation that has nothing to do with the origin of minds. 
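Why are groups of 6, 7 or 9 mice too small? A back-of-envelope power calculation makes the point. The sketch below uses the standard normal approximation for a two-sample comparison; the effect size of d = 0.8 (a large effect by Cohen's conventions) and the group sizes tried are illustrative assumptions, not figures from the paper.

```python
# Approximate statistical power of a two-sample comparison
# (normal approximation, two-sided alpha = 0.05).
# Power is the chance of detecting a real effect of size d with
# n subjects per group; values below 80% are conventionally
# considered underpowered.
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_power(d: float, n_per_group: int) -> float:
    z_crit = 1.96  # critical value for two-sided alpha = 0.05
    return 1.0 - normal_cdf(z_crit - d * math.sqrt(n_per_group / 2.0))

for n in (6, 7, 9, 15, 30):
    print(f"n = {n:2d} per group -> power ~ {approx_power(0.8, n):.0%}")
```

Even assuming a large true effect, groups of 6 to 9 subjects give well under a 50% chance of detecting it, which is one reason small-sample findings so often fail to replicate.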

Instead of giving us any actual encouragement that scientists have "new clues" as to the origin of minds, the Scientific American article rather leaves us with the impression that mainstream scientists have no good clues about such a thing. You could postulate a credible theory about the origin of human minds, but the "old guard" editors of Scientific American would never publish it. 

What is going on in Levin's latest Scientific American article is the same kind of inappropriate language that Levin abundantly used in a long article he co-authored with Daniel Dennett, one entitled "Cognition All the Way Down." In that article, Levin and Dennett use the words "cognition" and "agents" to refer to things like cells that have neither minds nor cognition.  I don't think either Levin or Dennett actually believes that cells have minds or cognition. Their article reads like something a person might write if he did not believe that cells actually have minds and selves and thoughts, but if he merely thought that speaking as if cells are "agents" with "cognition" is a convenient rhetorical device. The Cambridge Dictionary defines cognition as "the use of conscious mental processes." The same dictionary defines an agent as "a person who acts for or represents another."  

What seems to be  going on is simply that words are being used in improper ways, like someone using the word "gift" to describe a bombing.  It's just what we would expect from Darwinists,  for improper language has always been at the center of Darwinism from its very beginning.  At the heart of Darwinism is the misnomer  phrase "natural selection," which refers to a mere survival-of-the-fittest effect that is not actually selection (the word "selection" refers to choice made by conscious agents).  We should not be surprised that some thinkers who have for so long been talking about the selection-that-isn't-really-selection are now speaking about agents-that-aren't-really-agents and cognition-that-isn't-really-cognition and intelligence-that-isn't-really-intelligence. 

Wednesday, December 8, 2021

The Biggest Brain Projects Are Still Failing to Support Prevailing Brain Dogmas

In a December 2020 post I examined the failure of the two biggest brain research projects to back up claims commonly made about the brain, such as the claim that the brain produces the mind and the claim that brains store memories. Let us now look at how one of those two biggest brain research programs (the Human Brain Project) is still failing to substantiate such claims. The Human Brain Project is a billion-dollar European research project. 

The page here of the Human Brain Project web site is entitled "Highlights and Achievements," and presumably lists the biggest accomplishments of the Human Brain Project. Let's take a look at the items listed at the top of the page, in the year 2021 section.  The first five items merely discuss technology innovations, not anything involving new findings about the brain.  The sixth item is merely an interview with a professor who talks about no specific research findings of the Human Brain Project, and who says that the project has "become a truly enabling endeavor," which is the kind of vague praise that people give when they don't have much in the way of specific achievements to discuss.  Then we have an item merely talking about how humans have some brain cell types not found in mice.  

The next item is entitled "Controlling brain states with a ray of light." We have a statement of never-substantiated neuroscientist dogma:  "The brain presents different states depending on the communication between billions of neurons, and this network is the basis of all our perceptions, memories, and behaviors."  But the page discussing this ray of light research mentions nothing that sounds important.  We merely hear of some light being sent into a brain, with some transition occurring, although the only transition claimed is an awakening from sleep: "This new chemically-engineered tool allowed to induce and investigate in detail, in a controlled and non-invasive way, the transitions of brain from sleep- to awake-like states using direct illumination." Not very impressive, given that we have already long known of a tool for inducing a transition from sleep to awake-like states: the humble alarm clock.

The next item merely mentions work on some robot.  The item after that has the title "EBRAINS powers brain simulations to give insight into consciousness and its disorders." The page discussing this research mentions no progress in understanding how consciousness occurs. It merely mentions some project reading brain waves during normal consciousness and sleep. We have a quote making it sound as if unconsciousness always involves less complex brain waves:

"We can see that unconsciousness is not simply a matter of a loss of brain activity,” Massimini says. “It’s not necessarily weaker. But it is a lot less complex.” 

This statement is only half-true. Brain waves are less complex for patients under anesthesia. But the most complex brain waves are those seen during grand mal seizures (also called tonic-clonic seizures), and during such seizures people are typically unconscious. An EEG reading during a grand mal seizure resembles a seismograph reading during an earthquake. 

The next item is entitled "HBP-researchers find new approach for Energy-Efficient AI Applications," which obviously involves no progress in cognitive neuroscience. The item after that merely involves brain surgery, not cognitive neuroscience.  The next item is merely something pertaining to spinal cord surgery. 

We then see an item of little significance, merely something about some new technique for modeling dendrites. The item after that is the claim "A new means of neuronal communication discovered in the human brain." The claim is unjustified, being based solely on a paper failing to present robust evidence. 

The paper in question is "Long-range phase synchronization of high-frequency oscillations in human cortex."  The claim of a synchronization effect is not well established.  The paper looked for correlations after analyzing brain wave readings from fewer than 100 people.  A paper like this would only be credible if (a) it was a pre-registered study that declared before any data was gathered a hypothesis to be tested, how the data would be gathered and how the data would be analyzed, and (b) the paper discussed a thorough blinding protocol that was followed.  But there is no mention of any pre-registration of this study, and the paper never mentions any blinding protocol (failing to use the word "blind" in its text).  

So what was going on? Apparently the authors got some EEG readings, and were then absolutely free to analyze the data in any way they wanted, being free to slice and dice the data until they found something they could call "synchronization."  We should have very little confidence in a study following such a method.  Given a body of data and freedom to analyze it in any of 1001 ways, it is all too easy to find "synchronization" that is no real effect. For example, if I compared the wins and losses of sports teams with the ups and downs of stock markets, options markets and bond markets, I could probably find a little something I could claim as "synchronization." 
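The sports-and-markets point is easy to demonstrate. The sketch below generates purely random win/loss records and purely random market up/down series, then searches every pairing for the best agreement. The numbers of series and of days are arbitrary illustrative choices; the data is noise by construction, yet the search still turns up impressive-looking "synchronization."

```python
# Sketch: how analytic freedom manufactures "synchronization."
# All series below are pure coin-flip noise; we simply search many
# pairings and keep the best match found in either direction.
import random

random.seed(1)
n_days = 82  # e.g., one season of daily results

teams = [[random.choice([0, 1]) for _ in range(n_days)] for _ in range(30)]
markets = [[random.choice([0, 1]) for _ in range(n_days)] for _ in range(30)]

best = 0.0
for team in teams:
    for market in markets:
        agreement = sum(a == b for a, b in zip(team, market)) / n_days
        # count strong anti-correlation as a "finding" too
        best = max(best, agreement, 1.0 - agreement)

print(f"Best 'synchronization' found in pure noise: {best:.0%}")
```

With 900 pairings to choose from, a best match well above the chance level of 50% is essentially guaranteed, even though nothing here is synchronized with anything.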

While the Human Brain Project site has bragged that "a new means of neuronal communication" has been discovered, the scientific paper behind this claim does not even sound very confident of such a thing, merely saying that some brain oscillations "may be synchronized between widely distributed brain regions." Also, neuron communication does not mean that neurons make our minds or store our memories. 

The last item on the Human Brain Project's list of 2021 highlights is merely a discussion of some paper claiming similarities in the brains of birds and mammals.  We read a claim that "the brains of birds and mammals look surprisingly similar in their organization." This is not at all true, and bird brains look very different from human brains. 

Judging from the Human Brain Project's list of 2021 highlights, the lavishly funded Human Brain Project is not making any progress in verifying the main dogmas of cognitive neuroscientists, the claim that the brain is the source of the human mind, and the claim that brains store memories.  Similarly, we find no support for such dogmas in a recent article entitled "The Human Brain Project: six achievements of Europe’s largest neuroscience programme."

Here are the six achievements listed:

  • "Human brain atlas":  We read about merely fancy descriptions of parts of the brain. 
  • "Synapses in the hippocampus:" We read that "researchers have published detailed 3D-maps of around 25,000 synapses – electrical and chemical signals between brain cells – in the human hippocampus." Such a result does not seem so impressive when you consider that the brain is believed to contain trillions of synapses. Also, you don't explain mental phenomena such as understanding and memory by making maps of synapses or maps of neurons. 
  • "Robot hands":  Obviously this has nothing to do with verifying the claims of cognitive neuroscientists.
  • "A neuro-inspired computer":  The computer described is not anything like a computer having the characteristics of the brain. If you ever built such a computer, it would never work to process data reliably and at high speeds. In digital computers electrical signals travel with 100% reliability, but in the cortex of the brain a signal will only pass across a synapse with a likelihood of 50% or less. Computers have coordinate systems and indexing systems allowing the computer to instantly find the location of some stored data, but brains have no such things. 
  • "Virtual epileptic patient":  This has nothing to do with verifying the claims of cognitive neuroscientists.
  • "Scientific output":  We merely hear a mention that 1497 papers cite the Human Brain Project. 
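The reliability contrast in the "neuro-inspired computer" item can be put in numbers. The sketch below computes the chance that a signal survives a serial chain of relay points; the 10-step chain length is an illustrative assumption (real neural paths involve many synapses, often far more than ten).

```python
# Sketch: probability that a signal crosses every hop in a serial chain,
# at digital reliability versus the ~50%-or-less transmission likelihood
# reported for cortical synapses.

def chain_success(per_hop_reliability: float, hops: int) -> float:
    """Probability that a signal survives all hops in a serial chain."""
    return per_hop_reliability ** hops

hops = 10  # illustrative chain length
print(f"digital (100% per hop):  {chain_success(1.0, hops):.4f}")
print(f"cortical (50% per hop):  {chain_success(0.5, hops):.4f}")
```

At 50% per hop, fewer than one signal in a thousand survives a 10-step chain, which is why no engineer would build a fast, reliable data-retrieval device out of components like cortical synapses.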

In the year 2020 section of the "Highlights and Achievements" page of the Human Brain Project, you won't find anything that substantiates the main dogmas about brains taught by neuroscientists. My December 2020 post here discusses the items in that section (as well as the 2019, 2018 and 2017 sections), and explains why they fail to support claims such as the claim that brains make minds and the claim that brains store memories. 

The Human Brain Project is making no progress in supporting claims such as the claim that brains make minds and the claim that brains store memories because such claims are not correct.  But what about the other big brain project, the US-based BRAIN Initiative? In my December 2020 post I examined the failure of that project (as well as the Human Brain Project) to back up claims commonly made about the brain, such as the claim that the brain produces the mind and the claim that brains store memories.  Were there any big results for the BRAIN Initiative in 2021?

Apparently not, judging from the page here which lists 2021 highlights for the BRAIN Initiative.  There is some discussion of brain mapping that has not yet done anything to back up the main dogmas of neuroscience. We see only two stories relevant to whether brains make minds:

  • A story entitled "Neuroprothesis restores words to man with paralysis."
  • A story entitled "Reading Minds with Ultrasound: A Less-Invasive Technique to Decode the Brain's Intentions."
The first story discusses some man who had a stroke leading to brain stem damage causing him to lose the power of speech. Electrodes were planted in his head, to look for some correlation between motor cortex brain activity and attempts of the man to say one of 50 different words. A system was developed wherein the man's attempts to speak can be matched to one of the 50 words.  This merely shows that the brain has a role in the muscle movements related to speech.  It does not prove that the ideas for what to say arise from the brain. 

The story about "reading minds with ultrasound" has a title that is misleading clickbait. The corresponding study was merely done with monkeys.  What's going on is some obscure clear-as-mud business involving trying to predict which of two options (left or right) a monkey will take, based on reading brain states a few seconds before the movement. A good rule of thumb for experimental science is to ignore all studies that did not use at least 15 subjects per study group.  The main results for this study involve experiments on only a single monkey. The study (which shows no sign of using a blinding protocol) is not reliable evidence for any ability to read minds with ultrasound. 

It appears that neither the Human Brain Project in Europe nor the BRAIN Initiative in the US is making progress in supporting claims such as the claim that brains make minds and the claim that brains store memories.  Such progress will never be made because the brain is not the source of our mind, and our brains do not store memories. To find reasons justifying these statements, read the other posts on this blog. 

[image: research flop]

In today's science news, we have the results of a project to test the reproducibility of cancer research.  A paper reports little success in reproducing results.  We hear that a large fraction of scientists simply refused to respond to queries from fellow scientists trying to reproduce the results, which is just what we would expect if a significant fraction of published research was fraudulent or defective. Here is a very worrying quote from the abstract:

"We conducted the Reproducibility Project: Cancer Biology to investigate the replicability of preclinical research in cancer biology....However, the various barriers and challenges we encountered while designing and conducting the experiments meant that we were only able to repeat 50 experiments from 23 papers. Here we report these barriers and challenges. First, many original papers failed to report key descriptive and inferential statistics: the data needed to compute effect sizes and conduct power analyses was publicly accessible for just 4 of 193 experiments. Moreover, despite contacting the authors of the original papers, we were unable to obtain these data for 68% of the experiments. Second, none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors. While authors were extremely or very helpful for 41% of experiments, they were minimally helpful for 9% of experiments, and not at all helpful (or did not respond to us) for 32% of experiments."

Can you imagine a more damning statistic about the work quality of today's biological researchers, the fact that "none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments"?

In a separate paper, the researchers found that "the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original," which suggests a high degree of unreliability in biomedical research.  

Wednesday, December 1, 2021

Way Off in Their Predictions, Neuroscientists Keep Misdescribing Human Memory Performance

In the posts of this blog I have given very many reasons for thinking that the statements of neuroscientists about a brain storage of memories are just plain false. Contrary to the constant claims of neuroscientists that brains store memories, the brain bears no resemblance to a device for storing memories. There is nothing in the brain that resembles some component for storing learned information, and nothing in the brain that resembles some component for reading stored memory information.  The places that neuroscientists usually claim as the storage location of memories (synapses) are places of great instability and turnover that cannot possibly be a storage place for human memories that can last for 50 years or longer.  

Humans are able to recall detailed memories instantly, upon hearing a name or seeing an image.  The brain has no features that can account for such instant recall.  Humans know from their work with computers the type of things a device needs to have to be able to instantly recall stored information: things such as an addressing system or a position notation system, and things such as indexes. The brain has no such thing. Retrieving a memory from a brain would be like trying to get just the right index card (the one and only card storing some data) from a large swimming pool filled with index cards.  Moreover, the low reliability of synaptic transmission and the very high noise levels in brains should make it impossible for anyone to accurately retrieve large bodies of information from a brain.  Yet we know that humans can flawlessly retrieve very large bodies of information, such as when Hamlet actors or Wagnerian tenors accurately perform very long roles without an error, and when certain Muslim scholars recite their whole holy book without error. 
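The difference between an addressed store and a "swimming pool of index cards" is easy to demonstrate on a computer. In the sketch below, a Python dict stands in for an indexed, addressed store, and a plain list scan stands in for unaddressed search; the record names and the collection size are invented for illustration.

```python
# Sketch: addressed lookup versus unaddressed search.
# The dict has a hashing/indexing scheme; the list scan must
# examine cards one by one, like fishing one card out of a pool.
import time

n = 200_000
records = [(f"name-{i}", f"memory-{i}") for i in range(n)]
indexed = dict(records)  # addressed store: key -> value

target = "name-199999"

t0 = time.perf_counter()
hit_indexed = indexed[target]  # one hashed lookup
t1 = time.perf_counter()

# unaddressed search: examine every card until the right one turns up
hit_scan = next(value for key, value in records if key == target)
t2 = time.perf_counter()

print(f"indexed lookup took: {t1 - t0:.6f} s")
print(f"full scan took:      {t2 - t1:.6f} s")
```

Both methods find the same record, but only the indexed store finds it instantly; and nothing resembling such an addressing or indexing scheme has ever been found in a brain.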

Besides such reasons, we have an entirely different reason for thinking that neuroscientists are in their own little fantasy world when it comes to human memory. This is the reason that neuroscientists again and again misdescribe human memory performance and also make very poor predictions about human memory performance. 

We have an example of such misdescribing in a recent article on the online Nautilus magazine, written by the neuroscientist Anil Seth. Entitled "We Are Beast Machines," the article gives us some dehumanizing nonsense talk in which almost everything is an oracular dogmatic proclamation provided without any supporting evidence.  Before stating very absurd drivel telling us "all of our perceptions and experiences, whether of the self or of the world, are inside-out controlled and controlling hallucinations," Seth recalls some early memory and states, "I must have been about 8 or 9 years old, and like all early memories this one too is unreliable."  Here we have a claim that early memories are unreliable. But the claim has been refuted by studies. 

For example, in the paper "Early childhood memories: accuracy and affect," we read that very early childhood memories tend to be accurate:

"Subjects were asked to report the earliest memories of their lives. Where possible, the memory protocols were submitted to adults present at the time of the original episode for possible confirmation. The majority of memories were characterized by distinct emotion, with a higher count of negative than of positive emotion. The majority of memories proved accurate, with confirmation operating at as high a level in the case of positive or emotionally neutral memories as of negative memories. General memory content showed no differential patterns across negative and positive memories. Thus claims that infantile memories are powered uniquely by trauma, and/or routinely include distortions, were not supported."

In Scientific American we read that neuroscientists were not even in the right ballpark when asked to estimate how reliably people would remember things:

"Even memory experts can struggle to predict how accurate our recollections are. In a recent study at the University of Toronto, such experts were asked to predict the accuracy of memories of events that happened two days earlier. While recollections of these events were very good—more than 90 percent correct on average—the experts predicted they would be only 40 percent correct."

So our neuroscientists have a false idea that humans can't remember things well after two days, an idea totally contrary to the reality of human memory performance.  It's easy to understand why they would make errors of this type.  All the low-level facts we have learned about the brain defy the idea of very good and fast memory performance by neural means. So neuroscientists tend to (1) ignore or deny cases of very high-performance memory, and (2) portray human memory as much slower and less reliable than it actually is. 

Another way in which neuroscientists misdescribe human memory performance is their continued teaching of an utterly false doctrine that humans take quite a while to form new memories, many minutes or even hours.  The reality is that humans are constantly forming new memories instantly. 

There are a hundred ways to prove the reality of instant human memory creation.  The following thought experiment will suffice. Imagine you are watching a movie at home and you see on your TV screen the words "The End."  Now suppose a friend with you then immediately asks: "How did the movie end?" You will be able to correctly answer the question, because you have instantly formed a new memory of the movie's ending.  You will not need to tell your friend, "Give me twenty minutes for my memory of the ending to finish forming, and I will tell you the ending." 

Why do neuroscientists keep teaching this very silly idea that memories take many minutes or hours to form, an idea so gigantically contrary to human experience? It has to do with the incorrect idea they have about how memories form. Most neuroscientists claim that memories form from a strengthening of synapses.  It's an idea that makes no sense.  We have no known case of information ever being stored through some act of strengthening.  The imagined strengthening is something that would take many minutes, because of a need for protein synthesis, which occurs at a sluggish pace.  Having wed themselves to this extremely silly idea, neuroscientists are forced to deny one of the most obvious facts of human existence: that people can form new memories instantly. 

Neuroscientists also misdescribe human memory performance when they try to insinuate that permanent new memories require repeated exposures to a sensory stimulus.  This is certainly false.  Let's go back to the example of watching the movie.  What happens if the movie is shown again on TV six months from now? Unless you particularly enjoyed the movie, you will probably decide not to watch it again. Why? Because you remember what happened in the movie, after seeing it only once.  A large fraction of the things that you remember are things that you saw or heard or were taught only a single time. 

Another way in which neuroscientists misdescribe human memory performance is by sometimes denying certain types of exceptional memory skills. For example, like many articles written by neuroscientists, a New Scientist article states this:

"Photographic memory is the ability to recall a past scene in detail with great accuracy – just like a photograph. Although many people claim that they have it, we still don’t have proof that it actually exists."

Oh really? So why does a very technical 2019 scientific paper matter-of-factly refer to "a 13-year-old autistic boy with a photographic memory and speech-language deficit"? And how come Stephen Wiltshire has repeatedly shown the ability to accurately draw skylines he has only seen once?  Many children have photographic memory, and neuroscientists are splitting hairs when they try to distinguish between photographic memory and what they call "eidetic" memory, which means basically the same thing. 

In his book Thought and Choice in Chess, Adriaan D. de Groot presented data showing photographic memory in adult grandmasters. For example, one of them was able to perfectly reproduce from memory the chess position shown below (page 326), after being shown the board for less than 15 seconds (pages 322-323):

In the paper here, we read that in 1894 Binet sent out questionnaires to chess masters, asking them how they remembered chess positions. The masters "almost invariably reported having the chess board stored as a visual image, like a photograph."

A scientific paper reports the following, which contradicts the typical neuroscientist talk about the weakness of memory:

"Overall our results demonstrate the impressive nature of visual long-term memory fidelity, which we find is even higher fidelity than previously indicated in situations involving repetitions. Furthermore, our results suggest that there is no distinction between the fidelity of visual working memory and visual long-term memory, but instead both memory systems are capable of storing similar incredibly high fidelity memories under the right circumstances."