Sunday, June 26, 2022

What the Neuroscientist Should Have Said When Asked About Mind Uploading

The web site The Conversation at www.theconversation.com is one of numerous mainstream web sites that attempt to propagate the talking points of materialist thinkers, usually in a very one-sided way in which all kinds of very important relevant facts are hidden from readers. Recently the site had an article entitled "When Will I Be Able to Upload My Brain to a Computer?" In it a neuroscience professor named Guillaume Thierry answered a reader's question, which was this:

"I am 59 years old, and in reasonably good health. Is it possible that I will live long enough to put my brain into a computer? Richard Dixon."  

Professor Thierry answered the question poorly. He spoke largely as if the assumption underlying the question were a valid one, when he should have discussed the many facts of neuroscience indicating that it is not. Although the question is rather awkwardly phrased, it is clear enough what Mr. Dixon was assuming: that there is some arrangement of information and matter in a brain which somehow constitutes a person, and that it might be possible to transfer that arrangement to a computer.

The first thing that Professor Thierry should have discussed is that there is zero evidence that brains store information in the way that computers store information. Computers store information in a binary (or digital) format, in which information is encoded as a series of ones and zeros, such as 101010011010010101100101010111000. There is no evidence that brains store information in any such manner, and nothing in a brain or its neurons or synapses seems capable of doing so. We can imagine a physical structuring of an organ that would allow the storage of long binary sequences, but the brain seems to have no physical component allowing any such storage.
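To make the contrast concrete, here is a minimal Python sketch of what binary storage of a short phrase looks like inside a computer (it simply uses Python's built-in text-to-bytes conversion; nothing here is claimed about brains):

```python
phrase = "my dog has fleas"

# Each character is stored as a number (its ASCII code), and each number is
# held in memory as a pattern of eight binary digits.
bits = " ".join(format(byte, "08b") for byte in phrase.encode("ascii"))
print(bits)
# 'm' -> 01101101, 'y' -> 01111001, and so on
```

Nothing remotely like this bit-level scheme has ever been observed in neurons or synapses, which is the point being made above.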

We do know that neurons store information, but the only information ever discovered in a neuron is genetic information, the information stored in the nucleus of every cell. Such information is merely low-level chemical information, such as which amino acids make up particular protein molecules.  The only type of information that has been discovered in neurons is the same low-level chemical information found in kidney cells and skin cells and heart cells and foot cells. No one has ever discovered any binary sequence such as 10010101000011110001010101000111100101 in a human brain. 

Besides mentioning that there is no sign of any mechanism in the brain that could store digital or binary information such as that used by computers, Professor Thierry should have also mentioned that there are simply no physical signs of learned information stored in a brain in any non-digital, non-binary organized format resembling a system of representation. We can imagine other ways in which information could be stored in a brain, ways that did not involve the simplicity of repeated ones and zeroes. But any such scheme would tend to have an easily detected hallmark: the hallmark of token repetition. There would be some system of tokens, each of which would represent something, perhaps a sound or a color pixel or a letter, and there would be very many repetitions of the different types of symbolic tokens. Some examples of tokens are given below. Other examples include nucleotide base pairs (particular combinations of three base pairs represent particular amino acids) and coins and bills (a particular combination of coins and bills can represent a particular amount of wealth).

[Image: examples of symbolic tokens]
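As a small illustration of why token repetition would be an easy hallmark to detect, here is a toy Python sketch in which ordinary letters serve as the symbolic tokens; any message stored in such a system inevitably contains many repetitions drawn from a small set of symbols (the message itself is just an arbitrary example):

```python
from collections import Counter

# Toy example: letters act as the symbolic tokens of a stored message.
message = "my dog has fleas and my dog has a red leash"
token_counts = Counter(ch for ch in message if ch != " ")

print(token_counts.most_common(5))
# 'a' appears 6 times, 'd' and 's' 4 times each, and so on
```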

Other than the nucleotide base pair triple combinations that represent mere low-level chemical information such as amino acids, something found in neurons and many other types of cells outside of the brain, there is no sign at all of any repetition of symbolic tokens in the brain. Except for genetic information which is merely low-level chemical information, we can find none of the hallmarks of symbolic information (the repetition of symbolic tokens) inside the brain. No one has ever found anything that looks like traces or remnants of learned information by studying brain tissue. If you cut off some piece of brain tissue when someone dies, and place it under the most powerful electron microscope, you will never find any evidence that such tissue stored information learned during a lifetime, and you will never be able to figure out what a person learned from studying such tissue.  This is one reason why scientists and law enforcement officials never bother to preserve the brains of dead people in hopes of learning something about what such people experienced during their lives, or what they thought or believed, or what deeds they committed.    

Besides seeing no signs of stored memory information in brains, scientists are completely lacking in any detailed credible theory of how it is that a brain could store the type of things that people learn during their lifetimes. The difficulties of coming up with such a theory are endless. One gigantic difficulty is that humans learn a dizzying variety of things (sights, sounds, sensations, words, music, feelings, thoughts, concepts, muscle movements, and so forth), meaning that no imaginable system of symbolic encoding could handle even a third of the types of things people can learn. Another difficulty is that people are capable of remembering extremely long sequences of words and letters. But the very alphabets that are used to store such letters have only existed for a few thousand years.  There is no evidence that humans have undergone some great brain change in the past few thousand years that might help to explain a storage of great amounts of information using alphabets that have only existed for a few thousand years. A scientific paper discussing human evolution in the past two thousand years tells us that "aside from height and body mass index (BMI), evidence for selection on other complex traits has generally been weak," and that there are merely faint signals for human evolution in a few other areas: "increased infant head circumference and birth weight, and increases in female hip size; as well as on variants underlying metabolic traits; male-specific signal for decreased BMI; and in favor of later sexual maturation in women, but not in men," in addition  to "strong signals of selection at lactase and the major histocompatibility complex, and in favor of blond hair and blue eyes." There is no mention of any dramatic brain evolution that might explain a recent ability to store memories using alphabets that only arose in the past few thousand years. 

The same problem exists in regard to explaining the human ability to remember music. Written music is expressed using a notation system that is only a few centuries old, but humans have no problem remembering vast lengths of music learned by ear. In his prime performing years Placido Domingo was famous for having memorized male operatic roles in countless different operas, which altogether made up very many hours of singing he could perform without error.

The inability of neuroscientists to explain such wonders of memorization is not some minor shortfall. There is literally not a neuroscientist in the world who can give a credible detailed explanation of how anyone could store the simple phrase "my dog has fleas" in his brain or even the first line of the song "Mary had a little lamb." Yet there are Islamic scholars who have memorized every line of their holy book of 114 chapters, and actors and singers who have perfectly memorized very long roles such as Hamlet and Siegfried.

When asked to explain such things, all neuroscientists can do is mention little facts that fail to sound anything like an explanation for human memory. They may utter phrases such as "synaptic strengthening," ignoring the fact that the lifetimes of the proteins that make up synapses are about 1000 times shorter than the maximum length of time that humans can remember things. The failure of neuroscientists to explain other aspects of human mentality is just as large. No neuroscientist has a credible explanation for such basic human mental realities as imagination or abstract thinking or insight.
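The rough arithmetic behind that comparison is easy to check. Here is a small sketch using assumed round numbers (a two-week protein lifetime and a memory retained for 60 years); these are ballpark figures for illustration, not measurements:

```python
# Assumed round numbers, not measurements: synaptic proteins turn over on a
# timescale of roughly two weeks, while elderly people can recall events from
# 60 or more years earlier.
protein_lifetime_days = 14
memory_duration_days = 60 * 365

print(memory_duration_days / protein_lifetime_days)  # about 1,500
```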

On another web page a neuroscientist seems to confess that he and his colleagues have no idea of how groups of neurons could give rise to thoughts or emotions. He states this:

"We need to understand how circuits of cells give rise to a thought, an emotion, a behavior. And this will be extremely difficult to penetrate.” 

I have repeatedly argued on this blog (in posts such as this and this) that the physical limitations of brains mean that brains should be way too slow to account for things such as lightning-fast human thinking and recall. I found a scientific paper in which scientists confess just how bad the speed problem within the human brain is. In the paper "Emission of Mitochondrial Biophotons and their Effect on Electrical Activity of Membrane via Microtubules," six scientists (some of them neuroscientists) make this interesting confession:

"Synaptic transmission and axonal transfer of nerve impulses are too slow to organize coordinated activity in large areas of the central nervous system. Numerous observations confirm this view. The duration of a synaptic transmission is at least 0.5 ms, thus the transmission across thousands of synapses takes about hundreds or even thousands of milliseconds. The transmission speed of action potentials varies between 0.5 m/s and 120 m/s along an axon. More than 50% of the nerves fibers in the corpus callosum are without myelin, thus their speed is reduced to 0.5 m/s. How can these low velocities (i.e. classical signals) explain the fast processing in the nervous system?"

Rather than candidly confessing such realities when asked about uploading brains into computers, Professor Thierry speaks like someone with an underlying attitude of "we haven't done this yet because it's very hard." What he should have said is something like, "You should have every doubt that such a thing is possible, no matter how much we learn about the brain or computers." Similarly, suppose you ask a soil expert, "When will I be able to know all about the lives of all the previous owners of my land by analyzing the land's soil?" Such an expert will be giving you the wrong answer if he talks about how such a thing is hard. He will be pointing you in the right direction if he tells you there is no good reason to think that such a thing will ever be possible.

In responding to the question, Professor Thierry acted like a typical neuroscientist, using the question to try to impress us by listing many little facts he has learned. In answering the question he should have candidly confessed all of the things he does not know and does not understand about brains, mentality and memory. But neuroscientists don't like getting started on such a discussion, which rapidly leads to questions that cause us to doubt the dogmas that neuroscientists keep spouting. So I'm sure Professor Thierry would have preferred not to start discussing his ignorance of why people near death so often report floating out of their bodies and observing their bodies from above, a type of observation entirely inconsistent with claims that brains are the source of the human mind. And I'm sure Professor Thierry would have preferred not to start discussing his ignorance of how humans are able to perfectly recall vast bodies of information, even though each synaptic gap transmits signals with a reliability of less than 50%, which should make such recall impossible if it were occurring from neural activity. And I'm sure Professor Thierry would have preferred not to start discussing his ignorance of how mind and memory are well preserved after half of a brain is removed to treat very bad seizures in epileptics, an observational reality dramatically inconsistent with the dogmas of neuroscientists.

Instead of telling us the neuroscience reality that no one has ever found any memory information by studying brain tissue, Professor Thierry advanced a groundless and easily discredited speculation when he stated this: "Information in the brain is stored in every detail of its physical structure of the connections between neurons: their size and shape, as well as the number and location of connections between them." No one has any understanding of how learned information such as facts learned in school could ever be represented by changes in the sizes, shapes, numbers or locations of connections between neurons, nor does anyone have any credible detailed theory of how information could be stored in such a way. No evidence of symbolic tokens or information representation can be found by studying such connections. To the contrary, what we have learned about such connections (synapses) suggests the impossibility of the claim Thierry states. We know that synapses are "shifting sands" sorts of things, not stable structures that stay the same for decades. The proteins in synapses have average lifetimes of only a few weeks. Synapses are connected to unstable structures called dendritic spines, which have typical lifetimes of only a few weeks or months, and which don't last for years. See my post "Imaging of Dendritic Spines Hint That Brains Are Too Unstable to Store Memories for Decades" for the relevant observations.

Given all this structural instability in synapses and their attached dendritic spines, and the constant very high levels of molecular turnover in such things, we should not believe the speculation that synapses are storing human memories that can survive with remarkable stability for 50 years or longer. Synapses resemble the tangled, ever-changing vines in a dense part of the Amazon rain forest, and they no more resemble an information storage system than does some jumble of such vines. No neuroscientist could ever tell even a credible detailed tale of how the mere phrase "my dog has fleas" could be stored by some variation in the size, shape, strength, number or location of brain connections (synapses).

Alarm bells should go off in our minds when we read Professor Thierry state this: "The brain seamlessly and constantly integrates signals from all the senses to produce internal representations, makes predictions about these representations, and ultimately creates conscious awareness (our feeling of being alive and being ourselves) in a way that is still a total mystery to us." When someone claims that something occurs in a way that is a total mystery to him, it is often the case that no such thing is actually occurring. We have no actual evidence that brains "produce internal representations" from sensory signals, and no permanent signs of such internal representations can be found by studying brain tissue. We know that humans make predictions, but we do not know that brains make predictions, nor do we know that brains create conscious awareness. From near-death experiences that may involve vivid conscious awareness during cardiac arrest, when the heart has stopped and the brain is shut down, we have a very strong reason for doubting claims that brains produce conscious awareness. In general we should be skeptical about claims that x produces y when such claims are made by people confessing that such a thing happens in some way that is a total mystery to them.

If some old person is afraid of death and asks you about mind uploading, you might think: don't burst the guy's bubble and throw cold water on his hopes. But there's no reason to keep afloat hopes of immortality through mind uploading. A much better thing to do would be to explain all the reasons why it is utterly fallacious to think that you will be able to transfer your mind and memory into a robot or computer. That discussion could also mention how such reasons (along with many others) should lead you to suspect that your mind and memory will survive the death of your body, largely on the grounds that there is nothing in your brain or body that can explain your mind and your memory.

Sunday, June 19, 2022

Don't Be Fooled: Well-Trained Chatbots Aren't Minds

This week we have a story in the news about artificial intelligence. It seems that a Google engineer named Blake Lemoine told the Washington Post that he thought a Google project called LaMDA had reached "sentience," a term implying some degree of consciousness. The Washington Post article said, "Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn't signify that the model understands meaning."

Humans are rather easily fooled by chatbots, computer programs designed to imitate human speech. The first chatbot was a program called ELIZA, developed in the 1960s by Joseph Weizenbaum. The program was designed to imitate a psychoanalyst. ELIZA used simple programming tricks. For example, if someone typed a statement with a form such as "I am bothered by X," ELIZA might ask a question such as "How long have you been bothered by X?" or "Why do you think you are bothered by X?"
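To give a sense of how little machinery such tricks require, here is a minimal Python sketch in the spirit of ELIZA. This is an illustration of the pattern-and-template idea, not Weizenbaum's actual code:

```python
import random
import re

# Each rule pairs a pattern with canned question templates; the captured
# words from the user's sentence are echoed back inside the template.
rules = [
    (re.compile(r"i am bothered by (.+)", re.I),
     ["How long have you been bothered by {0}?",
      "Why do you think you are bothered by {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?"]),
]

def respond(sentence):
    for pattern, templates in rules:
        match = pattern.search(sentence)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return "Please go on."  # default reply when nothing matches

print(respond("I am bothered by my neighbor's dog."))
# -> e.g. "How long have you been bothered by my neighbor's dog?"
```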

Weizenbaum experimented with ELIZA by having people type on a computer terminal, interacting with an unseen agent that could have been either a real person or a mere computer program. Weizenbaum was surprised to find that a large fraction of the people interacting with the ELIZA program thought that they were conversing with a real person.  At the time computer programming was in a very primitive state. The lesson was clear: even some rudimentary programming tricks can be sufficient to fool people into thinking that they are talking to a real person, when they are merely talking to a chatbot (a computer program designed to imitate human speech). 

Now software is far more advanced, and we have systems that make ELIZA look very primitive in comparison. One type of chatbot is the expert-system chatbot, which has been well-trained in some very specific knowledge domain. A person talking to such a chatbot may be convinced he is talking to someone who really understands the subject matter involved. For example, if you talk to a podiatrist chatbot, the program may seem to know so much about foot health problems that you might swear you are talking to someone who really understands feet. But whenever there is a very limited knowledge domain, thousands of hours of computer programming can be sufficient to create an impression of understanding.

Then there are what we may call general knowledge chatbots.  Such programs are trained on many thousands of hours of online conversations between real humans. After such training it is relatively easy for a program to pick up response rules from pattern matching. 

I will give an example. The game Elden Ring is currently very popular, largely because of its wonderful graphics. Imagine that you train your pattern-matching chatbot AI software to eavesdrop on thousands of conversations between young men, in which many an exchange like this occurs:

Human #1: So, dude, you played any good PS4 or X-box games recently?

Human #2: Yeah, I'm playing Elden Ring. Man, the graphics are out-of-this world! But it's freaking hard. You gotta earn so many of these "rune" things. 

[Image: a visual from the "Elden Ring" game]

After training on many conversations that included an exchange like this, our AI chatbot pattern-matching software picks up a rule: when you are asked about good recent PS4 or X-box games, mention Elden Ring, and mention that the game has great graphics, but is hard to play.  Through similar training, the AI chatbot pattern-matching software picks up thousands of response rules, which can change from month to month.  A person interacting with the software will be very impressed.  For example:

  • Ask the software about computer games, and it will talk about whichever game is now popular, and say the things people are saying about that game.
  • Ask the software about TV shows, and it will talk about whatever shows are the most popular, and will say the kind of things people are saying about such shows.
  • Ask the software about recent movies, and it will talk about whatever movies are the most popular, and will say the kind of things people are saying about such movies.
  • Ask the software about celebrities, and it will repeat whatever  celebrity gossip is making the rounds these days. 
  • Ask the software about its politics, and it will say whatever political sentiments are the most popular in recent days.

With such powerful pattern-matching going on, it's all too easy to be fooled into thinking you are chatting with someone with real understanding about a topic. In fact, the software has zero understanding of any of the topics it is talking about. For example, a well-designed pattern matching software trained on thousands of hours of conversations about baseball may end up sounding like someone who understands baseball, even though the software really doesn't understand the slightest thing about baseball. 
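As a concrete (and deliberately crude) illustration of what such a learned response rule amounts to, here is a minimal Python sketch. The logged replies, the list of candidate game titles, and the trigger word are all invented for the example; the point is that the "training" is just counting, and the "answer" is just a canned sentence built from the most frequent pattern:

```python
from collections import Counter

# Hypothetical logged replies to "played any good games lately?" questions.
logged_replies = [
    "Yeah, I'm playing Elden Ring. The graphics are amazing but it's hard.",
    "Elden Ring, man. Gorgeous game, brutally difficult.",
    "Mostly Elden Ring lately. Looks incredible, keeps killing me.",
    "Still grinding Fortnite with my brother.",
]
candidate_titles = ["Elden Ring", "Fortnite", "Minecraft"]  # assumed title list

# "Training": count which title shows up most often in the logged replies.
title_counts = Counter(
    title for reply in logged_replies for title in candidate_titles if title in reply
)
top_title = title_counts.most_common(1)[0][0]  # -> "Elden Ring"

def chatbot(question):
    # The learned response rule: if the question looks like a question about
    # games, emit the statistically popular answer. Nothing here understands
    # what a game, a graphic, or difficulty actually is.
    if "game" in question.lower():
        return f"I've been playing {top_title}. The graphics are incredible, but it's really hard."
    return "Interesting. Tell me more."

print(chatbot("So, dude, you played any good PS4 or X-box games recently?"))
```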

Psychology professor Gary Marcus states the following:

"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient. Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun....To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a wise-cracking 13-year-old-boy impersonating chatbot that won a scaled-down version of the Turing Test....What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what that mean."

Imagine if someone could get silicon computers to really understand things.  Then we would very soon see computer systems that did not just sound as smart as humans, but which sounded much smarter than humans. Since you can connect together thousands of computer CPUs without any limitation such as the limitation of fitting within a skull, once truly comprehending computers had been invented, we would soon see computers speaking ten times more intelligently than humans or a hundred times more intelligently than humans. But you will never see that. All you will ever see is chatbots that  use pattern matching well enough so that they sound like humans of average intelligence, when asked average questions. And such chatbots won't even perform well when asked subtle rarely-asked questions using words that have multiple meanings. For example, if you mention that there are three types of Mustangs (a mustang horse, a Ford Mustang car, and a P-51 Mustang fighter-bomber plane), and you ask how well each type can fit inside each other, or ask whether each type could be disassembled and then successfully reassembled, or how well each type could be made without human assistance, a chatbot will "flame out and crash" like a P-51 Mustang shot down by an anti-aircraft gun. 

Sunday, June 12, 2022

11 Authorities Seem to Realize That "Your Brain Is a Computer" Is a Junk Metaphor

Biologists have long been guilty of passing off dubious metaphors. For example:

(1) Observing the wonders of biology, and having no explanation other than a survival-of-the-fittest effect, biologists have made claims such as "natural selection is an engineer" or "natural selection is a tinker." An engineer is a human who conceives complex ideas about designs that can be implemented. A tinker is usually a person who willfully attempts to improve an existing design by experimental trial and error. A blind natural process having no will, mind, goal or motivation cannot accurately be compared to either an engineer or a tinker; and since such a process does not involve actual selection or choice, it is misleading to describe it with the phrase "natural selection." 

(2) Observing DNA molecules that are mere repositories of low-level chemical information such as which amino acids make up particular protein molecules, quite a few biologists have used misleading metaphors in which DNA is compared to a blueprint or a recipe for making an organism.  Because it does not specify the anatomical structure of an organism or any of its organs or any of its cells, the "DNA as blueprint" metaphor is profoundly misleading.  How a speck-sized ovum is able to progress to become a full-sized human baby is a wonder of origination far beyond the understanding of today's scientists. 

(3) Observing brains that lack some of the main characteristics of computers (such as software, an operating system, and any known facilities for reading and writing new learned information), biologists have repeatedly claimed that the brain is like a computer. Very strangely, this metaphor is offered to try to explain how humans have minds, as if those advancing the metaphor failed to realize the gigantic shortcoming that computers don't have minds, don't have selves, and don't have consciousness. How can anyone think you can explain a mind and a self and a consciousness by using some metaphor referring to something (a computer) that is mindless and selfless, without any consciousness?

Recently at the physics paper server we have a book-length paper by 11 authorities, entitled "In search for an alternative to the computer metaphor of the mind and brain." The paper consists of different experts expounding on how "your brain is a computer" fails as a metaphor. A series of experts is asked questions including these:

(1) What do we understand by the computer metaphor of the mind and brain?

(2)  What are some of the limitations of this computer metaphor?

(3) What metaphor should replace the computational metaphor?

After a section by Madhur Mangalam and Damian G. Kelty-Stephen in which they state that "attempts to explain human intelligence by referring to an anatomical organ as an entity that 'computes' is likely a case of circular reasoning,"  we have a section in which the same authors advocate a replacement metaphor of a cascade, making the strange claim that "a hierarchical configuration of events nesting at multiple scales achieves adaptive, context-sensitive behavior through a balance of noise and order." Then we have Paul Cisek offer a replacement metaphor of the brain as a "control system." Then we have Benjamin De Bari and James Dixon giving us a silly classification scheme in which organisms are classified as examples of "dissipative systems." It's more of the same human-shrinking reductionism, in which humans are described as if they were some mere physics process.

Then we have Luis H. Favela, who makes this assessment of the lack of very notable progress in the heavily funded Human Brain Project:

"At eight years in, HBP leadership published a list of the project’s six most impressive achievements (Sahakian et al., 2021). These include a human brain atlas visual data tool, touch-based telerobot hand, neuro-inspired computer, and being cited in 1,497 peer-reviewed journal articles. There should be no doubt that much of this research is impressive, particularly when put into various contexts, such as the potential for advancing robotic limbs to improve the lives of people who have had amputations. However, it is far from clear whether any of these achievements have illuminated our understanding of brains and minds in a significant way." 

The high point comes in the discussion by Fred Hasselman in Section 6.2 (page 69). Hasselman refers us to these neuroscience case histories:

"When MRI scans of the brain show a large black hole inside the skull of a patient, indicative of a liquid occupying 50 –75% of the volume that typically contains vast amounts of interconnected neurons, anyone would be surprised to learn the patient is an otherwise healthy 44-years-old French civil servant, married, with children (Feuillet et al., 2007). In China, a 24-year-old woman, married with a daughter, went to a hospital because of persisting nausea and was found to be the 9th recorded case of Cerebellar agenesis: her cerebellum was missing completely (Yu et al., 2015). Due to Rasmussen syndrome, a  3-years-old Dutch girl underwent surgery to remove her language dominant hemisphere. This chronic focal encephalitis had caused a severe regression of language skills, but at age seven, except for slight spasticity of the left arm and leg, she is living an everyday life and is fully bilingual in Turkish and Dutch (Borgstein and Grootendorst, 2002)."

[Image: a table from the paper, showing good minds with bad brains]

Hasselman gives a reference to a paper by Marek Majorek, which cites a page from The Lancet of 9 February 2002. We see a picture of a girl lacking almost half of her brain. The picture caption (from The Lancet) reads this:

"This 7-year-old girl had a hemispherectomy at the age of 3 for Rasmussen syndrome (chronic focal encephalitis). Incurable epilepsy had already led to right-sided hemiplegia and severe regression of language skills. Though the dominant hemisphere was removed, with its language centres and the motor centers for the left side of her body, the child is fully bilingual in Turkish and Dutch, while even her hemiplegia has partially recovered is only noticeable by a slight spasticity of her left arm and leg. She leads an otherwise normal life."

Referring to operations removing half of a brain to treat very severe recurrent seizures, Hasselman then states this: "Vining et al. (1997) studied the burden of illness in 58 children who had undergone hemispherectomy due to various kinds of debilitating afflictions of the brain and, remarkably, found that most children were better off with half a brain: 'We are awed by the apparent retention of memory after removal of half of the brain, either half, and by the retention of the child’s personality and sense of humor.' " Hasselman mentions appeals to "youthful brain plasticity" as an explanation for such retention, something which makes no sense. If memories are stored in the brain, you should lose half of those memories if half of the brain is removed, and no conceivable amount of "plasticity" or "adaptability" could explain the retention of such memories. Hasselman states this:

"Consider the case of E.C., a 47-year-old right-handed, right-eyed patient who had his left (language) dominant cerebral cortex removed (Smith, 1966). E.C. had a pre-operative performance I.Q. (WAIS) of 108. Seven months after his dominant hemisphere was removed, his performance I.Q. was 104. He scored 85 out of 112 items correct on a verbal comprehension test. One would expect that removing a hemisphere storing many decades of unique traces of experienced events would scale to a much larger effect on I.Q. and cognitive ability."

Hasselman proposes a hypothesis of "Radical Embodied Cognition" in which "a massively redundant reality exists that is composed of many nested spatial and temporal scales on which physical processes interact by exchanging energy, matter and information." Later we have a writer who lectures us about resonances in the brain, and an expert who argues for the obscure idea that the brain is a "fractal antenna."

All in all, the paper gives us a further basis for drawing this conclusion: claims that your brain is a computer are futile and fallacious. Such claims are fallacious partially because the brain has nothing like the seven things that a computer uses to store and retrieve information (as discussed here).

To the contrary, there are the strongest reasons for thinking that brains cannot possibly be the cause of lightning-fast human thinking and memory recall. They include the following:
  • The fact that no one has the slightest idea of how any arrangement of neurons could ever cause the arising of abstract ideas. 
  • The fact that severe slowing factors (involving things such as cumulative synaptic delays) and many types of severe signal noise should make it impossible for brains to produce the instant accurate recall routinely occurring in humans and the lightning fast accurate thinking that occurs in people such as math savants who can produce very complex calculations with astonishing speed. 
  • The fact that unreliable synaptic transmission (occurring with less than 50% reliability in a chemical synapse) should make accurate memory recall and very accurate thinking impossible, contrary to the reality that humans such as actors playing Hamlet can recall large bodies of text with perfect accuracy, and other humans can do very complex mental calculations "in their head" with perfect accuracy.
  • The fact that not the slightest sign can be found of human learned information by microscopically examining brain tissue, and the fact no one even has a workable detailed theory of how human learned information (such as facts learned in school) could be translated into neural states or synapse states. 
Trying to prove the brain is a computer is futile, because if you were to prove such a thing, you would not explain consciousness and selfhood. Computers don't have selves, and are no more conscious than a stone.

Although they all still seem to prefer the idea that the brain is the source of the mind, the 11 paper authors have mentioned many observations that undermine such a claim and conflict with it. Had the authors been willing to touch upon the abundant evidence for observations of the paranormal (such as evidence for out-of-body experiences during cardiac arrest when the brain has shut down), they could have mentioned many additional observational facts that undermine claims that the brain is the source of the human mind. 

I will end with a quote from one of the papers cited by the paper I have discussed, a paper by Marek Majorek. He states this:

"It appears that the theory that electrical impulses recorded in the brain are traces of ‘information processing’ taking place within individual neurons and/or in neuronal assemblies, and ultimately leading to the emergence of consciousness in its varied and rich facets, is a fairy tale. There was a time, not very long ago, when serious scientists of the period adhered to the doctrine of abiogenesis, i.e. were convinced that life can arise spontaneously from inorganic matter. Not only did the great, but from today’s perspective rather ancient, Aristotle think that it was a ‘readily observable truth’ that aphids arise from the dew which falls on plants, fleas from putrid matter, mice from dirty hay, crocodiles from rotting logs at the bottom of bodies of water, and so on (cf. Lennox, 2001), but still in the seventeenth century Alexander Ross wrote: ‘To question [spontaneous generation] is to question reason, sense and experience. If he doubts of this let him go to Egypt, and there he will find the fields swarming with mice, begot of the mud of Nylus, to the great calamity of the inhabitants’ (Ross, 1652). We know better today, of course. It seems justified to claim that currently widespread beliefs attempting to interpret consciousness as a form of emergent property of purely physical systems are just as deeply mistaken about their subject matter as the beliefs of abiogenists concerning the origin of living organisms were about theirs. Just as mice cannot arise of the mud of the Nile, so consciousness and other more complex mental phenomena cannot arise from the ‘mud’of the firings of neurons in the brain. Thus the question, ‘Where can it arise from?’ imposes itself on us with renewed urgency."

Friday, June 3, 2022

Studies Debunk Hippocampus Memory Myths

Neuroscientists have often made the claim that the hippocampus is necessary for the formation of new memories. For example, one paper claimed that "clinical evidence indicates that damage to the hippocampus produces anterograde amnesia."  Anterograde amnesia is an inability to form new memories.  There was never any good evidence for such claims. 

To back up claims such as the one above, some people cite the case of patient H.M., a patient with a damaged hippocampus. For example, the paper quoted above states that patient H.M. "became unable to consciously recollect new events in his life or new facts about the world." This is not correct. A 14-year follow-up study of patient H.M. (whose memory problems started in 1953) actually tells us that H.M. was able to form some new memories. The study says this on page 217:

"In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When asked him whether President Kennedy was dead or alive, and he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable."

Another paper tells us that patient H.M. was able to learn new motor skills, stating this: "H.M. could successfully acquire, and subsequently retain, new motor skills in the context of several other experimental tasks (e.g., rotary pursuit, bimanual tracking, tapping)."  Another paper ("Evidence for Semantic Learning in Profound Amnesia: An Investigation With Patient H.M.") states this:

"We used cued recall and forced-choice recognition tasks to investigate whether the patient H.M. had acquired knowledge of people who became famous after the onset of his amnesia. Results revealed that, with first names provided as cues, he was able to recall the corresponding famous last name for 12 of 35 postoperatively famous personalities. This number nearly doubled when semantic cues were added, suggesting that his knowledge of the names was not limited to perceptual information, but was incorporated in a semantic network capable of supporting explicit recall. In forced-choice recognition, H.M. discriminated 87% of postmorbid famous names from foils. Critically, he was able to provide uniquely identifying semantic facts for one-third of these recognized names, describing John Glenn, for example, as 'the first rocketeer' and Lee Harvey Oswald as a man who 'assassinated the president.' Although H.M.’s semantic learning was clearly impaired, the results provide robust, unambiguous evidence that some new semantic learning can be supported by structures beyond the hippocampus proper."

Another paper tells us on pages 15 to 16 that patient H.M. was able to draw an accurate map of a bungalow he had moved to after his hippocampus damage, and that he was also able to correctly state the address of that bungalow. On page 20 that paper tells us that patient H.M. had a close-to-normal ability to remember visual information:

"In  a  picture  recognition  experiment,  H.M.  was  asked  to  look  at  complex colorful  pictures,  for  20  seconds  each,  and  to  try  to  remember  them  (Freed & Corkin, 1988 et al.,  1987). H.M.'s performance revealed a normal recognition of the  pictures  after 10,  24  hours,  72  hours,  and  one  week  after  encoding (Augustinack et al., 2014; Corkin, 2020; Freed & Corkin, 1988). Surprisingly, six months later, H.M. could score within one standard deviation from the controls’ average (Freed & Corkin, 1988)."

The myth that patient H.M. lost the ability to form new memories after hippocampus damage is therefore an "old wives' tale" of neuroscience literature, a claim not justified by the facts. I may also note that it is  not scientific to cite a patient with one physical issue and some other problem, and to claim or insinuate that the problem was caused by the physical issue. Using the same logic, you could take someone with hair loss and a problem concentrating, and claim that the problem concentrating was caused by the hair loss.  Ideas about a cause of something can only be soundly derived from studies involving many patients, not just one or a few. 

It is remarkable that scientific papers are still quoting the first paper on patient H.M. as evidence for the claim that the patient could not form new memories. That paper ("LOSS OF RECENT MEMORY AFTER BILATERAL HIPPOCAMPAL LESIONS") failed to document in any scientific way an inability of patient H.M. to form new memories. It provides as its sole evidence an anecdotal account of an interview of April 26, 1955. This is the only evidence provided:

"This was performed on April 26, 1955. The memory defect was immediately apparent. The patient gave the date as March, 1953, and his age as 27. Just before coming into the examining room he had been talking to Dr. Karl Pribram, yet he had no recollection of this at all and denied that anyone had spoken to him. In conversation, he reverted constantly to boyhood events and seemed scarcely to realize that he had had an operation."

From this scant anecdotal evidence (which easily could involve inaccurate recollection or misinterpretation by one or more doctors), the authors jump to the conclusion that patient H.M. "appears to have a complete loss of memory for events subsequent to bilateral medial temporal-lobe resection 19 months before." This is a conclusion not justified by any scientific tests reported in the paper. In fact, the paper tells us that patient H.M. scored a Memory Quotient of 67 on the Wechsler Memory Scale (WMS), a score that would have been impossible if patient H.M. had been unable to form new memories. The authors' claim of "a complete loss of memory" for events after the surgery is contradicted by the test score they themselves report in their paper. Judging from the paper here (particularly Table 1), scoring a Memory Quotient of 67 on the first version of the Wechsler Memory Scale would have required getting roughly 40 answers correct, something nobody could have done if they could not form new memories.

One of the main research papers on the hippocampus and memory is the paper "Memory Outcome after Selective Amygdalohippocampectomy: A Study in 140 Patients with Temporal Lobe Epilepsy." That paper gives memory scores for 140 patients, almost all of whom had the hippocampus removed to stop seizures. Using the term "en bloc" (which means "in its entirety") and the term "resected" (which means "cut out"), the paper states, "The hippocampus and the parahippocampal gyrus were usually resected en bloc." The paper refers us to another paper describing the surgeries, and that paper tells us that hippocampectomy (surgical removal of the hippocampus) was performed in almost all of the patients.

The "Memory Outcome after Selective Amygdalohippocampectomy" paper does not use the word "amnesia" to describe the results. That paper gives memory scores that merely show only a modest decline in memory performance.  The paper states, "Nonverbal memory performance is slightly impaired preoperatively in both groups, with no apparent worsening attributable to surgery."  In fact, Table 3 of the paper informs us that a lack of any significant change in memory performance after removal of the hippocampus was far more common than a decline in memory performance, and that a substantial number of the patients improved their memory performance after their hippocampus was removed. 

A 2020 paper is entitled "Preserved visual memory and relational cognition performance in monkeys with selective hippocampal lesions." The paper states this:

"We tested rhesus monkeys on a battery of cognitive tasks including transitive inference, temporal order memory, shape recall, source memory, and image recognition. Contrary to predictions, we observed no robust impairments in memory or relational cognition either within- or between-groups following hippocampal damage."

Citing a previous study, the paper notes that "formation of new memories in the object-in-scene task, one of the most accepted tests of episodic memory used with nonhuman primates, was found to be unaffected by lesions of the hippocampus itself."  It also notes that "There is a concerning lack of clear causal evidence for a critical role of the hippocampus in visual memory, episodic memory, recollection, or relational cognition in nonhuman primates."

To test the effects of hippocampus damage, the study authors injected five rhesus monkeys with neurotoxins. They estimate that this damaged about 75% of the hippocampus structures of the monkeys (Figure 1). The monkeys were subjected to a wide variety of cognitive tests. The paper concludes this:

"Contrary to dominant theories, we found no evidence that selective hippocampal damage in rhesus monkeys produced disordered relational cognition or impaired visual memory. Across a substantial battery of cognitive tests, monkeys with hippocampal damage were as accurate as intact monkeys and we found no evidence that the two groups of monkeys solved the tasks in different ways."

These results were similar to those reported by the paper here, a 2019 paper entitled "Nonnavigational spatial memory performance is unaffected by hippocampal damage in monkeys." The study tested five monkeys. The study states the following, noting that the monkey that performed the best on one memory test was in the group of hippocampus-damaged monkeys, not the control group of normal monkeys:

"Hippocampal damage did not reduce memory span or slow acquisition. Monkeys with hippocampal damage and control monkeys did not differ in the memory span they achieved during training (mean: HP = 4.4, C = 3.8; median = 4 for both groups; t8 = 1.09, p = .305). The monkey that progressed to the longest memory span (6) was in the hippocampal group (Table 1)."

The 1998 paper "Object Recognition and Location Memory in Monkeys with Excitotoxic Lesions of the Amygdala and Hippocampus" gave 11 monkeys "selective lesions of the amygdala and hippocampus made with the excitotoxin ibotenic acid." According to Table 1, the average hippocampus damage for seven of the monkeys was 73%. We read the following

"Postoperatively, monkeys with the combined amygdala and hippocampal lesions performed as well as intact controls at every stage of testing. The same monkeys also were unimpaired relative to controls on an analogous test of spatial memory, delayed nonmatching-tolocation. It is unlikely that unintended sparing of target structures can account for the lack of impairment; there was a significant positive correlation between the percentage of damage to the hippocampus and scores on portions of the recognition performance test, suggesting that, paradoxically, the greater the hippocampal damage, the better the recognition."

A 2023 paper did a meta-analysis of many studies testing memory in monkeys that had been given lesions of the hippocampus. The meta-analysis was entitled "Reevaluating the role of the hippocampus in memory: A meta-analysis of neurotoxic lesion studies in nonhuman primates." Here is Figure 5B from that paper:


[Image: Figure 5B from the paper, hippocampus damage and memory performance]

The graph exaggerates the differences, because it uses a scale starting at 50% rather than 0%, a graph trick that makes small differences look twice as big. Even with the "make the differences look bigger" trick, we see nothing very impressive in regard to the hippocampus. With a short delay and a long delay, there is merely a minimal difference, with the hippocampus-damaged monkeys performing a few percent worse. With a medium delay, the hippocampus-damaged monkeys performed a little bit better. These results fail to back up claims that the hippocampus is crucial for memory.
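For readers who want to see the effect of that graph trick, here is a minimal matplotlib sketch using made-up scores of 68% and 62% (illustrative numbers only, not values from the meta-analysis). The same six-point gap fills twice as much of the plot when the y-axis starts at 50% instead of 0%:

```python
import matplotlib.pyplot as plt

groups = ["Control", "Hippocampus-damaged"]
scores = [68, 62]  # hypothetical percent-correct values, not data from the paper

fig, (ax_full, ax_cut) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(groups, scores)
ax_full.set_ylim(0, 100)
ax_full.set_ylabel("Percent correct")
ax_full.set_title("Axis starts at 0%")

ax_cut.bar(groups, scores)
ax_cut.set_ylim(50, 100)  # the kind of truncation described above
ax_cut.set_title("Axis starts at 50%")

plt.tight_layout()
plt.show()
```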

Here is Figure 6A from the paper. The graph plots the amount of hippocampus damage on one axis, and the performance on the memory test on the other axis. The graph tells no clear tale. In three of the studies, very good performance (90% or better) occurred despite very high damage to the hippocampus (75% or more). In seven of the studies, good performance (85% or better) occurred despite 50% or greater hippocampus damage. Figure 3C of the paper shows that the studies involving hippocampus damage of 75% or greater involved about 15 animals per study, while the studies involving hippocampus damage of less than 70% used an average of only about 8 subjects. So we should grant more weight to the results shown in the upper right of the diagram below, results showing heavy hippocampus damage and little loss of performance.

[Image: Figure 6A from the paper, hippocampus damage plotted against memory performance]

The 1997 paper "Differential Effects of Early Hippocampal Pathology on Episodic and Semantic Memory" reported on three persons with severe hippocampus damage. We read, "Volumetric measurements derived from three-dimensional (3D) data sets showed that in each of the three patients, the hippocampi are abnormally small bilaterally, with volumes ranging from 43 to 61% of the mean value of normal individuals (Figs. 2 and 3A)."  We learn that "all three patients are not only competent in speech and language but have learned to read, write, and spell." We read this:

 "With regard to the acquisition of factual knowledge, which is another hallmark of semantic memory...all three patients obtained scores within the normal range (Table 2). A remarkable feature of Beth’s and Jon’s stores of semantic memories is that they were accumulated after these patients had incurred the damage to their hippocampi."

In a long footnote to Table 2, we get examples of the three patients answering questions based on quite complex writing in front of them, and answering some common knowledge questions, and the answers sound as good as you or I might give. The authors of this paper attempt to persuade us that the three patients suffered from damage to episodic memory. But they give no very strong evidence of such a thing, mainly mentioning that "none is well oriented in date and time, and they must frequently be reminded of regularly scheduled appointments and events, such as particular classes or extracurricular activities," and that "none can provide a reliable account of the day’s activities or reliably remember telephone conversations or messages, stories, television programs, visitors, holidays, and so on," leaving us in the dark about what exactly they mean by "reliably." Did they mean 100% correct, 95% correct, or 90% correct? We can't tell.  Overall, the paper is inconsistent with claims about the hippocampus being essential for memory. 

Postscript: Harvard scientist Karl Lashley did extensive experiments with animals, experiments involving removal or damage to different parts of the brain. In much of what he wrote, it is hard to disentangle the effect of hippocampus damage. But on page 92 of his book Brain Mechanisms and Intelligence: A Quantitative Study of Injuries to the Brain, we have a table that makes it pretty easy to check how much of an effect damage to the hippocampus has on maze performance in animals that had been trained to run a maze before parts of their brain were removed. The column on the far right lists the type of lesion the animal had. The letter H stands for hippocampus, N stands for No Injury, F stands for Fornix, R stands for right, L stands for left, and the numbers 1, 2 and 3 stand for grade of injury from slight to severe.

Lashley states this:

"If we select all cases which made more than 25 errors in retention tests, we find that there is no area of [brain] destruction common to all. For example, cases 100, 107 and 111 all show very serious [maze performance] loss, making from 5 to 47 times as many errors in postoperative retention tests as the normal average for learning. Their lesions are compared in Figure 23, which shows no significant overlap between them."

These findings were contrary to the dogma that the hippocampus is crucial to memory. Below is what the table tells us about some of the cases; the results are inconsistent with claims that the hippocampus is crucial for memory.

| Case # | Total brain tissue loss (%) | Hippocampus damage? | Total training time, seconds (post-operative) | Errors | Trials | Comment |
|---|---|---|---|---|---|---|
| 98 | 21.1 | Medium damage on right and left hippocampus | 310 | 34 | 15 | Much better performance than in case 100, which had no hippocampus damage but similar brain loss damage |
| 96 | 20.6 | Medium damage on left hippocampus | 63 | 51 | | Excellent performance, with hippocampus damage and one fifth brain loss |
| 100 | 21.5 | None | 7539 | 768 | 75 | Weak performance, but no hippocampus damage |
| 107 | 25.4 | None | 11536 | 689 | 150 | Weak performance, but no hippocampus damage |
| 111 | 28.3 | Medium damage on right and left hippocampus plus septum damage | 2230 | 127 | 48 | Much better performance than cases 107 and 100, despite medium-level hippocampus damage and more brain tissue loss |
| 114 | 31.1 | Small damage on both right and left hippocampus; about one third of brain removed | 12 | 1 | 1 | Very good memory performance despite heavy brain damage and some hippocampus damage |
| 116 | 33.9 | Severe damage on right and left hippocampus | 2836 | 547 | 150 | Weak performance with very bad hippocampus damage and one third of brain loss, but much better performance than cases 100 and 107, where there was no hippocampus damage and less brain damage |


Case 116 (described in the last row of the table above) debunks claims that the hippocampus is essential for memory. In that case an animal with severe damage to both the right and left hippocampus was able to learn, and to learn better than some animals with no hippocampus damage and less brain damage. Case 114 defies all claims that memories are stored in brains, as it involves excellent memory performance in an animal with one third of the brain removed.

In the paper "Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory" you can read here, we are told this:

"Some kinds of learning appear to be completely unaffected by hippocampal system lesions.... Examples of forms of learning that are spared are gradually acquired skills that emerge over several sessions of practice, such as the skill of tracing a figure viewed in a mirror (B. Milner, 1966), reading mirror-reversed print (N. J. Cohen & Squire, 1980), or anticipating subsequent items in a sequence governed by a complex stochastic grammar (Cleeremans, 1993). Hippocampal patients also appear to be spared in their ability to learn the structure common to a set of items: They are as good as normals in judging whether particular test items come from the same prototype, or were generated by the same finite-state grammar, as the members of a previously studied list (Knowlton, Ramus, & Squire, 1992; Knowlton & Squire, 1993)....In animal studies, it is clear that some forms of classical or instrumental conditioning of responses to discrete salient cues are unaffected by hippocampal system damage (for reviews, see Barnes, 1988; O'Keefe & Nadel, 1978; Rudy & Sutherland, 1994)."

The paper here refers to humans with hippocampus lesions, and tells us "their acquisition of new skills appears to be completely intact."

The paper "Hippocampal Lesion Patterns in Acute Posterior Cerebral Artery Stroke" did memory tests on patients with lesions of the hippocampus. We have some memory tests on patients who had damage to the hippocampus because of a stroke infarct, who are referred to below as HI patients (hippocampal infarct patients).  The groups referred to are those with a right hippocampus infarct, and those with a left hippocampus infarct. We read this:

"In the MMSE, the patients reached a score of 24.30±3.91 (lying in the mildly impaired range), with no difference between groups, t(18)=1.33, P=0.202. In the Clock Drawing Test, the patients reached a score of 2.84±1.26 (at the border of the normal range), with no difference between groups, t(17)=0.51, P=0.618."  

In regard to the results of an RBMT (Rivermead Behavioural Memory Test) assessment of long-term verbal memory, we read this: "Compared to normative samples, the scores of patients with left HI were within the mildly impaired range, whereas the scores of patients with right HI were only slightly below the mean of the normative sample."  Overall, this paper supports the claim that the hippocampus is not some crucial component of memory. The patients with hippocampus damage performed only slightly below normal on memory tests.