Wednesday, January 24, 2024

No, Scientist, That Isn't Human Memory or Brain Action, But Merely Computer Activity

There are many problems involving human memory on which neuroscientists have made no progress. Neuroscientists have done nothing substantial to answer these questions:

  1. How are humans able to instantly form memories, much faster than can be explained by imagining that synapses are strengthened by protein formation (which takes minutes)?

  2. How are humans able to remember things for 50 years, which is 1000 times longer than the average lifetime of the proteins in synapses?

  3. Why do humans who have their brains shut down during cardiac arrest continue to have extremely vivid near-death experiences that they can remember very well?

  4. How are humans able to instantly recall very old memories despite the lack of any known physical characteristic in the brain (such as indexing, neuron numbering, or a neuron coordinate system) that would allow the brain to perform the “instantly finding the needle in a mountain-sized haystack” operation needed to instantly find an obscure memory?

  5. Why are autistic savants such as Kim Peek so often able to have astonishing mental skills far beyond those of ordinary people, even though such savants often have major brain damage?

  6. How is it that people with hyperthymesia (whose brains are not significantly different from those of ordinary people) are able to remember in great detail what happened to them on every day since reaching adulthood?

  7. How could a human ever be able to memorize vast quantities of words (such as 10 major operatic roles), when the words belong to a language less than a thousand years old, which human biology (having only very old genes) should never have been able to store as neural states?

  8. How could a brain ever be able to learn or recall anything, when the brain seems to have nothing resembling a writing mechanism, and nothing resembling a reading mechanism?

  9. How could human minds that learn things and form memories of such great diversity ever remember anything by writing to a brain, when no one has discovered any type of encoding scheme by which human memories and learned information could be converted to neural states or synapse states?
If a neuroscientist restricts himself to the brain, he will never be able to give a convincing answer to any of these questions, although he may fool many people by giving some vacuous answer so cluttered with neuroscience jargon that it serves as a kind of "smoke screen" for the underlying ignorance.

But there is a sneaky trick used by quite a few neuroscientists. The trick is to create something using computer software and computer hardware, and then to try to make that cybernetic hi-tech construction sound rather neural or brain-like. There are various devious tactics that can be used as part of this trick:
  • Trick #1: something that is not at all biological or neural or organic can be misleadingly described with adjectives such as "neural," "synaptic," "cortical," or "hippocampal." The classic example of this misleading trick was when computer programmers created a purely software creation that they called a "neural net," even though it involved no neurons at all.
  • Trick #2: a neuroscientist can create a purely software innovation that runs on computer hardware, and describe it using some phrase such as "a model of how the brain works" or "a human memory model." This involves a deceptive comparison between something that is digital, transistor-dependent, and the result of human design, and something (the brain) that is not digital, not transistor-based, and not the result of human design.
  • Trick #3: Particular parts of some software construction can be given misleading names corresponding to the components of brains.  For example, some class or module or subroutine or section or layer of a computer program can be called a "neuron," a "cell," or a "synapse" or maybe even a "hippocampus" or a "cortex."
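The "neural net" naming trick can be seen in miniature. A software "neuron" is nothing but an arithmetic formula: a weighted sum of input numbers passed through a simple function. A minimal sketch in Python (the weights here are hypothetical, chosen only for illustration):

```python
# A so-called "neuron" in a software "neural net" is just arithmetic:
# a weighted sum of input numbers, passed through a simple function.
# Nothing biological is involved. (Illustrative weights, not from any paper.)

def software_neuron(inputs, weights, bias):
    # The entire "synaptic" computation is a multiply-and-add.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The "activation" is a trivial formula: max(0, total).
    return max(0.0, total)

output = software_neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(output)  # prints 0.1
```

Calling such a multiply-and-add formula a "neuron" is a naming choice by programmers, not a discovery about biology.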
A recent example of this kind of misleading affair was a paper entitled "A generative model of memory construction and consolidation." The title is misleading because what the paper describes is a computer software construction that stores and retrieves data. Such stored data is not at all "memory construction" or an example of "memory experiences" in any human sense. 

In the abstract of the paper, we have this sentence that is very misleading: "Here we present a computational model in which hippocampal replay (from an autoassociative network) trains generative models (variational autoencoders) to (re)create sensory experiences from latent variable representations in entorhinal, medial prefrontal and anterolateral temporal cortices via the hippocampal formation." The authors have misleadingly applied the name "hippocampal replay" to some computer software activity that does not involve any brain hippocampus at all, and they have misleadingly used the terms "entorhinal, medial prefrontal and anterolateral temporal cortices" for some computer software activity that does not involve any brain cortex. Consequently, there is no truth in the later claim in their abstract that their "model explains how unique sensory and predictable conceptual elements of memories are stored and reconstructed by efficiently combining both hippocampal and neocortical systems, optimizing the use of limited hippocampal storage for new and unusual information," if by "memories" you mean human memories.

Figure 1 of the paper shows that the paper is making use of the Trick #3 described above. We have a diagram referring to parts of something that is purely a computer software implementation running on computer hardware. But these parts are misleadingly labeled with names taken from neural anatomy. So some purely software piece of code is called a "hippocampus," and another purely software piece of code is called a "sensory neocortex." And the terms "episodic memory" and "imagination" are used for some purely computer software events that are not human mental experiences. The same thing happens throughout the paper, with the brain anatomy term "hippocampus" being repeatedly used for something created by the authors that is purely a software construction.

The paper makes use of trick-language equivocation involving the word "memory." In many of the places where the paper uses the term "memory," it means a human memory, as in a human acquiring or recalling a memory. But in many other places the term "memory" is used in an entirely different way, to refer to some computer capability not involving mental experiences. We all know that computers can store and retrieve images, data and records. That does nothing to explain human memory, both because the physical details of digital computers and digital software are vastly different from the physical details of human brains, and because something merely occurring in a computer is not comparable to a human memory experience. Computers don't have experiences.

To review the differences between the human brain and computer systems, read my post "The Brain Has Nothing Like 7 Things a Computer Uses to Store and Retrieve Information." Below are some things computers have and brains do not have:

  • an operating system (an elaborate set of stored subroutines for general-purpose tasks);
  • a physical component specialized for storing non-genetic data;
  • a physical component specialized for reading stored non-genetic data;
  • a CPU unit that sequentially processes programmed instructions;
  • various applications to store and retrieve data arising from human interactions;
  • the ASCII code for encoding text characters as numbers;
  • a decimal to binary conversion table or utility;
  • a medium (such as a hard disk) that allows a permanent, stable storage of information;
  • a storage location system by which the exact position or address of a data item can be specified, allowing the creation of indexes;
  • indexes that use such addresses to allow fast retrieval from an exact location.
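The last two items can be illustrated concretely. In software, an index maps a key to an exact storage address, allowing retrieval without any searching. A sketch of this kind of addressed lookup (with hypothetical data), which has no known physical counterpart in the brain:

```python
# Computers retrieve data instantly because every stored item sits at an
# exact address, and an index maps keys to those addresses.
# (Hypothetical data, for illustration only.)

storage = ["apple record", "banana record", "cherry record"]  # slots 0..2
index = {"apple": 0, "banana": 1, "cherry": 2}                # key -> address

def instant_lookup(key):
    # One index probe, then one read at a known address -- no searching.
    return storage[index[key]]

print(instant_lookup("cherry"))  # prints: cherry record
```

Without the address scheme and the index, the only way to find the record would be to scan through the storage item by item.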
Below is a quote from the Supplemental Information document of the above paper, one making clear that the so-called "model" of the paper is a very convoluted computer software processing pipeline unlike anything that the brain could have:

"The following list describes the sequence of operations within the large VAE’s encoder network, using the layer names from the TensorFlow Keras API (see also Figure 3):
1. Input layer for arrays of shape (n, 64, 64, 3), representing n 64x64 RGB images 
2. Dropout layer with a dropout rate of 0.2 (during training, dropout randomly sets a fraction of the input units to 0 at each step, reducing overfitting and encouraging robustness) 
3. Conv2D layer with 32 filters (i.e. convolutional windows, or feature detectors) and kernel size of 4 (i.e. windows of 4x4 pixels) 
4. Batch normalisation layer (batch normalisation is a common technique which computes the mean and variance of each feature in a mini-batch and uses them to normalise the activations) 
5. LeakyReLU activation layer (LeakyReLU is an activation function that is a variant of the Rectified Linear Unit, ReLU) 
6. Conv2D layer with 64 filters and kernel size of 4 
7. Batch normalisation layer 
8. LeakyReLU activation layer 
9. Conv2D layer with 128 filters and kernel size of 4 
10. Batch normalisation layer 
11. LeakyReLU activation layer
12. Conv2D layer with 256 filters and kernel size of 4 
13. Batch normalisation layer 
14. LeakyReLU activation layer 
15. Global average pooling 2D layer 
16. Dense layer to produce the mean of the latent vector 
17. Dense layer to produce the log variance of the latent vector (in parallel with the layer above) 
18. Custom sampling layer that samples from the latent space, with the mean and log variance layers as inputs."

But this is only part of the incredibly complex computer software activity that was occurring.  We are told that the software activity also involved this:

"The same information for the decoder network is as follows: 
1. Input layer for arrays of shape (n, latent dimension), where latent dimension is 20 in these results, representing n latent vectors 
2. Dense layer that expands the latent space to a size of 4096 
3. Reshape layer to reshape the input to a 4x4x256 tensor 
4. Upsampling 2D layer with a 2x2 upsampling factor 
5. Conv2D layer with 128 filters and kernel size of 3 
6. Batch normalisation layer 
7. LeakyReLU activation layer 
8. Upsampling2D layer with a 2x2 upsampling factor 
9. Conv2D layer with 64 filters and kernel size of 3 
10. Batch normalisation layer 
11. LeakyReLU activation layer 
12. Upsampling2D layer with a 2x2 upsampling factor 
13. Conv2D layer with 32 filters and kernel size of 3 
14. Batch normalisation layer 
15. LeakyReLU activation layer 
16. Upsampling2D layer with a 2x2 upsampling factor 
17. Conv2D layer with 3 filters and kernel size of 3"
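For readers unfamiliar with the jargon in the lists above, each of these "layers" is an explicit arithmetic formula written by programmers. For example, "LeakyReLU" and "batch normalisation" can be sketched in a few lines of plain Python (a simplified sketch; real library versions such as the TensorFlow Keras layers add trainable parameters and many other details):

```python
# Simplified sketches of two of the "layers" named above, to show they are
# ordinary human-written arithmetic, not anything neural or biological.
# (Real library implementations add trainable scale/shift parameters.)

def leaky_relu(x, alpha=0.3):
    # Passes positive numbers through; scales negative numbers by alpha.
    return x if x > 0 else alpha * x

def batch_normalise(values, eps=1e-3):
    # Rescales a batch of numbers to mean 0 and variance close to 1.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

print(leaky_relu(-2.0))            # prints -0.6
print(batch_normalise([1.0, 3.0])) # two values symmetric about zero
```

Every step in the quoted pipelines reduces to formulas of this kind, deliberately composed by the programmers.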

All of these software layers were built by purposeful human programming activity, using human-constructed high-tech tools such as compilers, programming languages and integrated development environments (IDEs). Such layers can be built only if you have a digital, high-tech, transistor-based infrastructure. Such artificial high-tech software constructions have no relevance to explaining how a brain could store or retrieve memories, because brains have no such infrastructure, and nothing corresponding to any such layers.

The paper's authors have provided the programming code they wrote; some of their source files contain doubly nested loops. Doubly nested programming loops are something that can be done by advanced computer programs, but we know of nothing physical in the brain that could correspond to such a thing or allow such a thing. Similarly, the lines above refer many times to batch normalisation, an operation defined over "batches" of data items processed together; the brain has nothing corresponding to such batch processing of data. The lines above also refer to hundreds of different filters, but other than the blood-brain barrier we know of no filters in the brain. And the lines above refer several times to layers that reshape or resample data, something not corresponding to anything known to physically exist in the brain.
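A doubly nested loop, for reference, is simply a loop running inside another loop, stepping through a two-dimensional structure one cell at a time in a strict programmed sequence (a generic sketch, not code taken from the paper):

```python
# A doubly nested loop: an outer loop over rows, an inner loop over columns,
# visiting every cell of a 2D grid in a fixed programmed order.
# (Generic illustration, not from the paper's source files.)

grid = [[1, 2, 3],
        [4, 5, 6]]

total = 0
for row in grid:       # outer loop
    for cell in row:   # inner loop
        total += cell

print(total)  # prints 21
```

Executing such a sequence requires a machine that fetches and runs instructions one after another, something no one has found in the brain.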



By trying to insinuate that the software package described above has some relevance to explaining how a human could have mental memory experiences through brain activity, the authors are like a student who writes "high marks" on a college dormitory wall and then brags about having achieved "high marks" at college. That is like what occurs throughout the paper discussed above, with the word-trick equivocation occurring not on the word "marks" but on the word "memory."

The Cambridge Dictionary lists three definitions of memory:
(1) "The ability to remember information, experiences, and people." Each of the seven examples given refers to mental experiences of people.
(2)  "Something that you remember from the past." Obviously this definition can only refer to mental experiences of people.
(3) "The part of a computer in which information or programs are stored either permanently or temporarily, or the amount of space available on it for storing information."  This definition is entirely different, as it involves only computers and does not involve mental experiences.  A computer can operate overnight in "batch mode" without any human observing what it is doing. 

It is a grotesque abuse of language and a grotesque fallacy of equivocation to mix up the first two definitions with the third, and to fool people into thinking that something involving computers (but not involving mental experiences or brain activity) helps to explain what goes on in human minds during mental experiences. Equally grotesque equivocation would occur if you argued that Taylor Swift is a star, that a star is a giant hot sphere of gas, and that therefore Taylor Swift is a giant hot sphere of gas. A more honest title for the paper "A generative model of memory construction and consolidation" would be "Some computer programming we did, which we strangely describe using brain anatomy terms and psychology terms."

What goes on in papers like this is that misimpressions are created by a massive mixing of mentions of dissimilar things. A discussion of a purely software creation running on a digital computer is interspersed everywhere with mentions of human memory, factual mentions of the anatomy of brains, and implausible speculations about how certain parts of brains might be involved in memory. Many of these speculations are ruled out by facts about the brain's physical shortfalls, such as the very short lifetimes of synapse proteins, the unreliable signal transmission in brains, very high levels of neural noise, and the lack of any addresses or indexes in brains.


Neuroscientists lack any credible explanation for human memory, which cannot be plausibly explained by any type of brain activity. If someone ever makes you think he has explained how a brain could store and retrieve a memory, the odds are 100 to 1 that some type of misstatement, vacuous hand-waving or trick language was involved. The human brain does not have any physical characteristics that can explain the wonders of instant human recall and the wonder of precise, accurate, detailed recollections that can persist for 50 years or longer. Physical shortfalls of the brain (such as the short lifetimes of synaptic proteins, very high neural noise, lack of addresses and indexes, and unreliable synaptic transmission) exclude the brain as a credible explanation for human memory. Such shortfalls are of the greatest importance in judging this matter, but are senselessly ignored by neuroscientists.
