Thursday, August 7, 2025

Misstatements About Lonni Sue Johnson Are Like Misstatements About Henry Molaison

A recent article in Scientific American is entitled "You Don't Remember Being a Baby, But Your Brain Was Making Memories." The article provides no real evidence that brains create memories, and its attempts to support such a claim are mostly references to junk science studies. 

The author is a neuroscientist named Nick Turk-Browne who fills up his article with unfounded claims and bad reasoning. First he suggests the reason people cannot remember their first five years is that the hippocampus is not active during those years. That makes no sense. The hippocampus is active during the first five years of life. 

Turk-Browne then repeats the frequently repeated false claim that patient H.M. (Henry Molaison) suffered hippocampus damage in adulthood that left him unable to form new memories, saying that Henry Molaison was "unable to store memories." The claim is not true. 

Henry Molaison (patient H.M.)  was able to remember very many things from his life before his hippocampus damage. A 14-year follow-up study of patient H.M. (whose memory problems started in 1953) actually tells us that H.M. was able to form some new memories. The study says this on page 217:

"In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When asked him whether President Kennedy was dead or alive, and he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable."

Another paper ("Evidence for Semantic Learning in Profound Amnesia: An Investigation With Patient H.M.") tells us this about patient H.M., clearly providing evidence that patient HM could form many new memories:

"We used cued recall and forced-choice recognition tasks to investigate whether the patient H.M. had acquired knowledge of people who became famous after the onset of his amnesia. Results revealed that, with first names provided as cues, he was able to recall the corresponding famous last name for 12 of 35 postoperatively famous personalities. This number nearly doubled when semantic cues were added, suggesting that his knowledge of the names was not limited to perceptual information, but was incorporated in a semantic network capable of supporting explicit recall. In forced-choice recognition, H.M. discriminated 87% of postmorbid famous names from foils. Critically, he was able to provide uniquely identifying semantic facts for one-third of these recognized names, describing John Glenn, for example, as 'the first rocketeer' and Lee Harvey Oswald as a man who 'assassinated the president.' Although H.M.’s semantic learning was clearly impaired, the results provide robust, unambiguous evidence that some new semantic learning can be supported by structures beyond the hippocampus proper."

Turk-Browne also makes the claim that because of a bad case of hippocampus damage, Lonni Sue Johnson was "unable to store memories." That claim is also untrue. Lonni Sue Johnson suffered very bad brain damage after a case of viral encephalitis. She was discussed at length in the book "The Eternal Now" by Michael D. Lemonick. But on page 13 of the preface of the book, we read a different claim. Instead of a claim that Lonni Sue Johnson could not form any new memories, we merely read that "she could no longer form new memories that she'd be able to rely on in the future, except in the most rudimentary way." This is an admission that Lonni Sue Johnson could form new memories. 

We can have some skepticism about such a claim, because it comes from an author presenting a compilation of interesting cases of memory loss, and such an author may be motivated to exaggerate memory loss to make the story more interesting and the book more marketable. 

Lemonick makes generalizations about the memory of Lonni Sue Johnson that we should take with some skepticism, because they are not established by formal tests. What often seems to be happening is that Lemonick is making generalizations based on limited anecdotal evidence, hasty generalizations that might be disproven by extended formal testing.  

Lemonick tells us that Lonni Sue Johnson had a big hole in the center of her brain. That may have affected her recognition memory and her visual acuity. So when we later read about Lonni failing to recognize someone she had previously met, that is no proof of an inability to form new memories. It may be mere evidence of a visual problem or a recognition problem. 

So, for example, when Lemonick tells us on page 9 that "if she sees someone new, then sees them again a day later (or even five minutes later, as I discovered for myself), she'll have no idea that she ever saw them before," we do not actually know that this is some inability to form new memories.  It could be some mere difficulty in visual recognition or visual perception. Or, it could be that Lemonick is wrongly making sweeping generalizations based on very little data. If someone does not recognize you after meeting you a few minutes before, that is nowhere close to sufficient evidence that the person is unable to form new memories. 

The claim that Lemonick makes on page 9 that Lonni had lost memory of some of her family members is another claim that we should treat with suspicion. It may be a claim based mainly on a failure of visual recognition. Lemonick makes statements such as "she didn't know Kay, or her daughter Maya," Maya being Kay's daughter, not Lonni's. But what justification does Lemonick have for such claims? How does he presume to know what a brain-damaged person did or did not know or remember upon seeing a friend or that friend's daughter? Was the claim based merely on a failure of Lonni to recite their names after seeing them? We don't know. 

What we need here is some systematic procedure to test such a claim that a memory of someone's daughter had been lost. Such a procedure would include both a test of visual recognition and a transcript of a long interview. The interview might ask questions such as "Do you remember Kay?" and "Have you ever heard the voice that will speak next?" and "Do you recognize the face you will see next?" and so forth. But we don't get details of any such procedure. We mainly get Lemonick presuming what Lonni did or did not remember, based on thin evidence. 

On the next page (page 10) it becomes clear that the heavily brain-damaged Lonni did not lose all her memories of the past. We read that "she knows Maggi and Aline, but when you show her photographs of her aunts and uncles, she recognizes only some of them." We read that "she does know that she once had two airplanes." When Lemonick claims that Lonni does not remember her wedding day or her divorce, we should treat such claims with great skepticism, because they are not backed up by any quotes that Lemonick gives. And even if you have a quote from someone saying she does not remember some event, that does not prove that the person has no memory of such an event. Ask the right questions at different times, and the same person may give you some details of the event. For example, ask a person about Napoleon, and she may say, "I don't remember anything about Napoleon." But then ask about Napoleon's final battle, and you may get an answer of "Waterloo." I can hardly overemphasize the importance of this point. Single-statement self-reports by a person about what he remembers on a topic may be very unreliable. And such self-reports coming from brain-damaged persons may be particularly unreliable. 

Often when a person says that he does not remember anything about some topic, it's just a way of indicating that the person does not want to be bothered with trying to recall what he remembers about that topic. Ask an adult what he remembers about the war of 1812, and there's a good chance he may say something like, "I don't remember anything about that." But ask the same person whether he remembers any fire occurring during that war, and there's a good chance the person may give an answer such as, "Yes, I remember the British burnt the White House," referring to an event of that war.  It is easy to imagine possible reasons why someone who got divorced might make some statement sounding like she does not remember her wedding or her divorce (for example, the person might be making an excuse to avoid recalling possibly painful memories of an unsuccessful marriage). 

Later on the same page when Lemonick claims that Lonni had lost "just about every other memory she'd accumulated in fifty-seven years of life," we should doubt very much that Lemonick is speaking correctly. What is the justification for such a claim? Was some formal standard test done to justify such a claim? Or is Lemonick simply making guesses about what Lonni remembers?

I made a search on Google Scholar for scientific papers referring to Lonni Sue Johnson. I was unable to find a single scientific paper that mentioned her using that name. I did find an article by a science writer, one claiming that Lonni "could not form new memories." That does not match the previously quoted statement by Lemonick suggesting that Lonni could form "rudimentary" new memories. The article presents no data supporting this claim that Lonni Sue Johnson could not form new memories. The article refers to "published studies of her memory after the viral attack." We have some citations at the end of the article. Only one of the papers cited refers to Lonni Sue Johnson, using the initials L. S. J. The paper is behind a paywall, but the abstract of the paper does not mention any inability of this L. S. J. to form new memories. 

The claim in the article above does not match what we are told in a 2016 Johns Hopkins article, which merely says that Lonni Sue Johnson had a "severely restricted ability to learn new facts," which is different from being unable to learn new facts. 

The video here shows a picture of Lonni Sue's brain, showing severe damage, with the black areas being places hollowed out by the virus:


We hear a narrator (whose claims should be taken with skepticism) claiming that Lonni Sue lost almost all of her memories. But around the 2:50 mark we hear Lonni Sue successfully and very rapidly reciting all of the letters in the alphabet in correct order, performance seemingly incompatible with the narrator's claim. Around the 3:00 mark the narrator says Lonni Sue has "a little ability to create new memories," contrary to Turk-Browne's claim that she could not form new memories.  At the 3:28 mark we see Lonni Sue playing what looks like a violin very well (the instrument seems to be a viola). Around the 5:36 mark we see very good drawings Lonni Sue made after her brain damage. 

Around the 5:56 mark we hear Lonni Sue speaking like a normal person, saying art "is a language, a visual language, that you can reach everyone of every nationality," and that "writing is fun too." Around the 8:30 mark we hear Lonni Sue sing-speaking, singing in an apparently improvised melody. 

Another video on Lonni Sue has the phony title "The Woman Who Lost All Her Memories," a title not matching the facts of the case described above. The video provides no evidence to support the claim that Lonni Sue could not form new memories. 

In this case we are lacking any systematic evidence for the claim that Lonni Sue Johnson could not form new memories. For such a claim to be made with credibility, we would need something like a transcript of a long interview, or the results of hours of systematic testing. A failure of someone to recognize a person they had met before is not good evidence of an inability to form new memories.  Such a failure could be due to visual processing defects and visual recognition problems that are mainly related to vision rather than memory. 

Remembering that old slogan "extraordinary claims require extraordinary evidence," we can translate the slogan to mean "you should have very strong evidence before making a claim of the extraordinary." The claim that Hubert Pearce had extrasensory perception is an extraordinary claim, but it is backed up very well by many hundreds of hours of very careful tests that Professor Joseph Rhine did with Hubert Pearce (tests described here). The claim that Alexis Didier had powers of clairvoyance is an extraordinary claim, but it is supported by endless successful tests performed on Alexis Didier, some of which are described here. No one should be making a claim that Lonni Sue Johnson could not form new memories unless they have very strong evidence to back up such a claim, such as a long, detailed scientific paper giving the results of very careful tests of her. Such evidence seems nowhere to be found. 

Lack of motivation in someone asked a question is usually a more plausible explanation for a lack of an answer than an inability to learn. Claims of an inability to learn would be far more convincing if they were backed up by careful tests repeated many times, in which subjects were strongly motivated to perform well. You can imagine all kinds of ways to motivate better performance, such as offering 300 dollars for each item recalled.

memory test
He may be wrongly reported as having anterograde amnesia

In my next post in a few days I will discuss other misstatements in the Scientific American article, in which we hear a discussion of   neuroscience experiments done on infants, experiments I will criticize as being goofy and reckless.  We should always remember that when it comes to cases of memory difficulties, the world of neuroscience literature is a "give them an inch, and they'll take a mile" affair. We should remember that there is a very strong incentive for people to make cases of memory difficulties sound worse than they are, because such claims may increase citation counts and book sales and video viewership, in a way that leads to greater profit or success for someone engaging in such exaggerations, and because such claims may be made by those very eager to conjure up claims that may seem to support "brains store memories" dogmas. 

Sunday, August 3, 2025

There Is No Robust Evidence That Neuron Firing in the Brain's Frontal Lobes Is Information Processing

On the day I am writing this post I see in a news article a typical statement about the brain. Someone states this:

"Our brains process information via a vast network containing many millions of neurons, which can each send and receive chemical and electrical signals. Information is transmitted by nerve impulses that pass from one neuron to the next, thanks to a flow of ions across the neuron’s cell membrane. This results in an experimentally detectable change in electrical potential difference across the membrane known as the 'action potential' or 'spike.' "

But is this an accurate description of what is occurring in the frontal lobes of the brain? It would seem not. Neurons are continually firing in the frontal lobes of the brain, but there is no evidence that such firing is information processing or information transmittal in the sense of data or knowledge being passed around. 

It is true that a certain type of information processing is occurring throughout the brain. The chromosomes of all cells (including neurons) contain DNA, and when that DNA is read, it can be considered a form of information processing. But such information processing is something different from the firing of neurons.

Neuron firing may involve some information processing in two areas of the brain: the occipital lobe and the parietal lobe. These two regions are shown below (the parietal lobe in yellow and the occipital lobe in green, at the back). The occipital lobe is connected to the eyes by the optic nerve, so you might say that this lobe at the back of the brain processes information received from the eyes. The parietal lobe receives inputs such as touch sensations and pain sensations, so you might say that this lobe processes such inputs. 



But what about the frontal lobes at the front of the brain? (I will refer to lobes because there is one such lobe in each hemisphere or half of the brain.) The neurons in the frontal lobes are always firing, at rates ranging from about 1 action potential per second to 100 or more action potentials per second. Is there any adequate warrant for claiming that such neuron firing is an example of information processing? There is not. 

We might have a reasonable basis for calling such neuron firing in the frontal lobes "information processing" if someone had been able to decipher some kind of code corresponding to such neuron firing. But no one has been able to do any such thing. There has been much speculation about some kind of coding system that might be used by firing neurons to transmit information. But all of that has been nothing but speculation. No robust evidence has ever been produced that the firing of neurons in the frontal lobes is any kind of real information processing or information transmittal. When scientists have tried to produce evidence for such a thing, the results are mere pareidolia, like someone claiming to see the face of Jesus in his toast. 

But, it may be claimed, don't we know that the frontal lobes produce human thought, and does not such human thought qualify as information processing? No, we do not actually know that the frontal lobes of the brain or any part of the brain produce human thought. 

When fMRI scans are taken of the brain during cognitive activity, no strong evidence is produced backing up claims that the frontal lobes of the brain are some "seat of thought." Such scans typically show variations from region to region of only about 1 part in 200. The little regions with 1 part in 200 greater difference are scattered around the brain. Such fMRI scans are actually consistent with the claim that the brain is not the source of human thinking or cognition, because the variations are no greater than you might expect from chance variations. But you might think otherwise after looking at one of those "lying with colors" visuals that try to make regions that differ by only 1 part in 200 look like they differ by some substantial percentage. 

It is part of the dubious folklore of neuroscientists that the prefrontal cortex or frontal lobes are some center of higher reasoning. But the scientific paper here tells us that patients with prefrontal damage "often have a remarkable absence of intellectual impairment, as measured by conventional IQ tests." The authors of the scientific paper tried an alternate approach, using a test of so-called "fluid" intelligence on 80 patients with prefrontal damage. They concluded "our findings do not support a connection between fluid intelligence and the frontal lobes." Table 7 of this study reveals that the average intelligence of the 80 patients with prefrontal cortex damage was 99.5 – only a tiny bit lower than the average IQ of 100. Table 8 tells us that two of the  patients with prefrontal cortex damage had genius IQs of higher than 140.

In a similar vein, the paper here tested IQ for 156 Vietnam veterans who had undergone frontal lobe brain injury during combat. If you do the math using Figure 5 in this paper, you get an average IQ of 98, only two points lower than average. You could plausibly explain that 2 point difference purely by assuming that those who got injured had a very slightly lower average intelligence (a plausible assumption given that smarter people would be more likely to have smart behavior reducing their chance of injury). Similarly, this study checked the IQ of 7 patients with prefrontal cortex damage, and found that they had an average IQ of 101.

It also should be remembered that brain-damaged patients taking standard IQ tests may have higher intelligence than the test score suggests.  A standard IQ test requires visual perception skill (to read the test book) and finger coordination (to fill in the right answers using a pencil). Brain damage might cause reduced finger coordination and reduced visual perception unrelated to intelligence; and such things might cause a subject to do below-average on a standard IQ test even if his intelligence is normal.  

Using the term "neoplasms" to refer to brain tumors, the 1966 study here states, "Taken as a whole, the mean I.Q. of 95.55 for the 31 patients with lateralized frontal tumors suggests that neoplasms in either the right or left frontal lobe result in only slight impairment of intellectual functions as measured by the Wechsler Bellevue test."  

In the paper "Neurocognitive outcome after pediatric epilepsy surgery" by Elisabeth M. S. Sherman, we have some discussion of the effects on children of hemispherectomy, surgically removing half of their brains to stop seizures. Such a procedure involves a 50% reduction in the total volume of the frontal lobes of the brain, and a 50% reduction of the prefrontal cortex. We are told this:

Cognitive levels in many children do not appear to be altered significantly by hemispherectomy. Several researchers have also noted increases in the intellectual functioning of some children following this procedure....Explanations for the lack of decline in intellectual function following hemispherectomy have not been well elucidated. 

Referring to a study by Gilliam, the paper states that of 21 children who had parts of their brains removed to treat epilepsy, including 10 who had surgery to the frontal lobe, none of the 10 patients with frontal lobe surgery had a decline in IQ post-operatively, and that two of the children with frontal lobe resections had "an increase in IQ greater than 10 points following surgery." 

The paper here gives precise before and after IQ scores for more than 50 children who had half of their brains removed in a hemispherectomy operation.  For one set of 31 patients, the IQ went down by an average of only 5 points. For another set of 15 patients, the IQ went down less than 1 point. For another set of 7 patients the IQ went up by 6 points. 

A writer at Slate.com states the following:

"And victims of prefrontal injuries can still pass most neurological exams with flying colors. Pretty much anything you can measure in the lab—memory, language, motor skills, reasoning, intelligence—seems intact in these people."

In 1930 a patient listed as Joe A. in the medical literature underwent a bilateral frontal lobectomy performed by Dr. Walter Dandy, who removed almost all of his frontal lobes. An autopsy in 1949 confirmed that "both frontal lobes had been removed." The paper describing the autopsy said that from 1930 to 1944 Joe A.'s behavior was "virtually unchanged." On page 236 of this source, we read that Dandy said this of three patients including Joe A.: "These three patients with the extirpation of such vast areas of brain tissue without the disclosure of any resulting defect is most disappointing." I could see how it would be disappointing for someone hoping to prove a connection between some brain area and intellectual function. Page 237 of the same source tells us that on casual meeting Joe A. appeared to be mentally normal. Page 239 of this source states this about Joe A., summarizing the findings of Brickner:

"Nor was intellectual disturbance primary. The frontal lobes played no essential role in intellectual function; they merely added to intellectual intricacy, and ' were not intellectual centers in any sense except, perhaps, a quantitative one.' " 

A 1939 paper you can read here was entitled "A Study of the Effect of Right Frontal Lobectomy on Intelligence and Temperament." A patient C.J. was tested for IQ before and after an operation removing his right frontal lobe. He had the same IQ of 139 before and after the operation. Page 9 says the lobectomy "produced no modification of intellectual or personality functions." On page 10 we are told this about patients having one of their frontal lobes removed:

"Jefferson (1937) reported a series of eight frontal lobectomies in which the patients were observed for intellectual and emotional deficits following operation. There were five cases of right frontal lobectomy, three of whom were living and well when the article was written. It could be stated definitely that in two of the three cases there were no abnormalities which could be noted by the surgeon, patient, or family, and while the third case showed a mild memory defect, the operation had been too recently performed to judge whether or not the loss would be permanent. The three cases of left frontal excision likewise showed no significant changes, but comment was made that one patient was slightly lacking in reserve, another remained slightly facetious, and the third, who suffered a transient post-operative aphasia, appeared a trifle slow and diffident."

The following excerpt from a scientific paper tells us of additional cases of people who did not seem to suffer much mind damage after massive damage to the frontal lobes or prefrontal cortex. Resection is defined as "the process of cutting out tissue or part of an organ."

"Several well-documented patients have been described with a normal level of consciousness after extensive frontal damage. For example, Patient A (Brickner, 1952) (Fig. 2A), after extensive surgical removal of the frontal lobes bilaterally, including Brodmann areas 8–12, 16, 24, 32, 33, and 45–47, sparing only area 6 and Broca's area (Brickner, 1936), 'toured the Neurological Institute in a party of five, two of whom were distinguished neurologists, and none of them noticed anything unusual until their attention was especially called to A after the passage of more than an hour.'  Patient KM (Hebb and Penfield, 1940) had a near-complete bilateral prefrontal resection for epilepsy surgery (including bilateral Brodmann areas 9–12, 32, and 45–47), after which his IQ improved. Patients undergoing bilateral resection of prefrontal cortical areas for psychosurgery (Mettler et al., 1949), including Brodmann areas 10, 11, 45, 46, 47, or 8, 9, 10, or 44, 45, 46, 10, or area 24 (ventral anterior cingulate), remained fully conscious (see also Penfield and Jasper, 1954Kozuch, 2014Tononi et al., 2016b). A young man who had fallen on an iron spike that completely penetrated both frontal lobes, affecting bilateral Brodmann areas 10, 11, 24, 25, 32, and 45–47, and areas 44 and 6 on the right side, went on to marry, raise two children, have a professional life, and never complained of perceptual or other deficits (Mataró et al., 2001)."

Apparently patient KM got smarter after they took out most of his prefrontal cortex. That's a case helping to show that brains don't make minds. The book here discusses intelligence tests done on patients who underwent surgery on the frontal lobes:

"It was natural that the effect of an injury on the frontal lobes, said to be concerned with the higher functions of men, should be measured by these tests of intelligence. The absence of marked effects on mental ability, as measured by these intelligence tests, was, not surprisingly, felt to be puzzling." 

This paper here describes a case of a "modern Phineas Gage": a patient C.D. who suffered massive prefrontal damage after a penetrating head injury. But C.D.'s IQ after the injury was measured at 113, well above average. His verbal IQ after the injury was 119, in the 90th percentile. We read:

"C.D. reported that he did not have any cognitive or emotional problems following the accident. In describing how his thinking skills were completely unaffected, C.D. stated that, 'all the shattered bone was caught in the gray matter in front of the brain.' " 

The paper also tells us, "C.D.’s performances on memory tests were all in the average to above-average ranges in terms of the traditional measure of level of correct responses." 

There is another case of this type. The case is particularly interesting because it helps to discredit the claim that the frontal lobes of the brain are necessary for language and thinking.  The case is reported in the paper "Early bilateral and massive compromise of the frontal lobes."  We read about an amazing case of an 8-year-old girl with good mental skills despite having basically no frontal lobes of the brain. We see an MRI scan showing a gigantic black empty region in the brain corresponding to missing frontal lobes.

We read that a brain scan at age three revealed this:

"GC's first report of frontal compromise at age three. MRI scans revealed no structures in the frontal lobe, covered with cerebrospinal fluid. Weighed-T1 MRI scans showed no recognizable frontal structures, expect for a small portion of the ventral frontal cortex. The mesencephalon, pons, and medulla oblongata were present, and so were all other lobes and the cerebellum."  

We read that a brain scan of the girl at age 8 showed basically the same results, with at most only a tiny portion of the frontal lobes existing. 

Under a heading of "Neurological and neuropsychological assessment" we read that "she could describe sensory and affective experiences, and reacted to environmental events with apparent emotional and cognitive congruency (e.g., pleasure, tiredness, playfulness, anger, and basic symbolization; Supplementary Video 1, Supplementary Video 2)." The links take us to a page of videos of the young girl. We see her seeming to act pretty much like a normal girl of her age. I recommend watching all of the short videos on the page. The girl with basically no frontal lobes stands, dances, seems to speak normally, and responds to requests to touch parts of her body and to show how she brushes her teeth. The person asking the girl the questions speaks very rapidly, but the girl seems to have no difficulty understanding the questions, and the girl makes appropriate verbal and manual responses. Asked to point to the questioner's thumb, the girl points to the right spot. Asked to point to the girl's eyes, the girl points to the right spot. Asked where she would wear a pair of glasses, the girl points to her eyes. Asked where she would wear a pair of shoes, she points to her feet. The girl is able to distinguish between herself and a fantasy character (Minnie Mouse), and says that she is not Minnie (Video 9). 

We see below a visual of the girl's brain, from Supplementary Video 11:

A much better title for the paper would have been "Good cognitive performance despite loss of the frontal lobes."

In this post I have taken selected excerpts from my much longer post "Reasons for Doubting Thought Comes from the Frontal Lobes or Prefrontal Cortex," which you can read here. The post includes most of the evidence discussed above, along with a discussion of many other papers and cases that collectively provide a very strong basis for rejecting common claims that thought or cognition comes from the frontal lobes or prefrontal cortex of the brain. Other posts of mine very relevant to this discussion are my posts discussing how all parts of the brain have an abundance of many types of severe signal noise. 

The truth is that we have no basis for claiming that the firing of neurons in the brain's frontal lobes is any type of information processing, any type of computation or any type of thinking. What we know is consistent with the idea that the firing of neurons in the frontal lobes is mere noise, no more an example of information processing than the arising of bubbles on the surface of a boiling soup. 

Brain Is Not Like a Computer


Thursday, July 31, 2025

Translate Interface Bug Corrected

 I was sad to discover that with the Blogger settings I had been using, it was not even possible for the users of some languages to switch to their preferred language. The problem has been fixed, and using the Translate button at the top right, you should be able to translate this blog to any desired language. 

Wednesday, July 30, 2025

Hyperpolyglots Intensify the Explanatory Shortfalls of "Brains Make Minds" Claims

99% of all adult males have brains about the same size, and 99% of all adult females have brains about the same size. The average brain weight of a male is about 1336 grams, and the average brain weight of a female is about 10-15% smaller. Whenever there is a difference in some type of human mental ability of more than 200%, this is a great problem for those claiming that brains make minds -- because such a difference does not correspond to any difference in brain size or brain speed. 

One such difference is a difference in intelligence. A Reader's Digest article describes several people who had an IQ of more than 200, including William Sidis, who enrolled at Harvard at the age of 11 and graduated at the age of 16. There is no evidence that having an IQ above 200 is associated with having a much larger brain. 

Then there are differences in creativity. I know of no test giving a numerical score for creativity. But anyone well-educated about the history of music, art and literature will realize that certain humans have had a level of creativity vastly exceeding that of the average man. Examples that come to mind are Shakespeare, Picasso, Mozart, Wagner, Goethe, Edison, Michelangelo, Tolstoy, Tchaikovsky and Beethoven.  None of these figures seems to have had a brain much larger than average. 

Then there are huge differences in episodic memory. Late in the 20th century psychologists became aware that a certain small number of people have levels of episodic memory dramatically better than those of the great majority of humans. People with such a memory are now said to have Highly Superior Autobiographical Memory or HSAM, also called hyperthymesia. People with such an ability can typically remember details of almost every day in their adult lives. A well-studied example is the case of Jill Price. The reality of Highly Superior Autobiographical Memory had actually been documented as early as the late nineteenth century, as I document in my post here on the case of Daniel McCartney. A modern experiment showed those with Highly Superior Autobiographical Memory scoring 25 times higher on a random dates test. 

There are also huge differences in the mental calculation ability of humans. Many prodigies with normal brains (and sometimes damaged brains) have various types of extraordinary calculation ability. A widely documented ability is called calendar calculation, and consists of the ability to very quickly name the day of the week, given any date in a period that might go back 100 years or even longer, perhaps 200 years. Daniel McCartney had such an ability, as did many others, including many with autism. 
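To give readers an idea of what such a calculation involves, below is a small sketch (in Python) of one published formula for the task, Zeller's congruence. Nothing here is claimed about how such prodigies actually perform the feat; the sketch merely shows the kind of arithmetic needed to get the right answer.

# Zeller's congruence: one published formula for finding the day of the week
# of any Gregorian date. Result: 0 = Saturday, 1 = Sunday, ..., 6 = Friday.
def day_of_week(year, month, day):
    if month < 3:              # treat January and February as months 13 and 14
        month += 12            # of the previous year
        year -= 1
    K, J = year % 100, year // 100
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(day_of_week(1969, 7, 20))   # Sunday (the day of the Apollo 11 moon landing)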

There are also huge differences in the ability of humans to recall facts or bodies of text. Many prodigies with normal brains have an ability to recall factual information to a degree many times greater than the average person. My post here documents such an ability in Daniel McCartney.  A more impressive case was that of Kim Peek, who supposedly could recall everything he had read in more than 7000 books. Then there are very many cases of people who have memorized word-for-word hundreds of pages of text, such as the entire Quran. Such people seem to have normal brains, but a memorization ability many times greater than the average person. 

Then we have the cases of what are called hyperpolyglots. These are a small number of people who can fluently speak many different languages, more than 10. An article in the New Yorker describing such people is entitled "The Mystery of People Who Speak Dozens of Languages." We read about Luis Miguel Rojas-Berscia, who can supposedly speak "fluently" 13 languages, while having a "command" of 22 languages:

"He is a hyperpolyglot, with a command of twenty-two living languages (Spanish, Italian, Piedmontese, English, Mandarin, French, Esperanto, Portuguese, Romanian, Quechua, Shawi, Aymara, German, Dutch, Catalan, Russian, Hakka Chinese, Japanese, Korean, Guarani, Farsi, and Serbian), thirteen of which he speaks fluently. He also knows six classical or endangered languages: Latin, Ancient Greek, Biblical Hebrew, Shiwilu, Muniche, and Selk’nam, an indigenous tongue of Tierra del Fuego, which was the subject of his master’s thesis."

We read in the article a reference to Giuseppe Gasparo Mezzofanti (17 September 1774 – 15 March 1849), who was famed for his ability to speak more than 30 different languages.  We read of "Corentin Bourdeau, a young French linguist whose eleven languages include Wolof, Farsi, and Finnish; and Emanuele Marini, a shy Italian in his forties, who runs an export-import business and speaks almost every Slavic and Romance language, plus Arabic, Turkish, and Greek, for a total of nearly thirty."

A scientific paper says this about hyperpolyglots:

"There are many multilingual talented language geniuses in ancient and modern China and abroad (see Erard, 2012; Hyltenstam 2016, 2018, 2021; Adriana & Birdsong, 2019). Some famous examples include Popes John Paul II and Benedict XVI, as well as writers James Joyce, Tolkien, and Anthony Burgess, and also professional linguists such as Rasmus Kristian Rask, who is believed to speak 25 languages and can read in 35 languages. Griffiths & Soruç (2020) noted that Professor Andrew Cohen, an expert on learning strategies in the field of second language acquisition (SLA), is also an authentic hyperpolyglot who is proficient in 13 foreign languages including Chinese through self-study. Cohen has also presented and published related articles at international conferences about his multilingual talents (Cohen & Li, 2013). In addition, there are many polyglots who are diplomats, the most famous being Emil Krebs who mastered 68 languages in speech and writing and studied 120 other languages (Wikipedia)....Tim Keeley from the School of Intercultural Management, Kyushu Sangyo University in Japan is proficient in more than 30 languages, making him a real hyper-polyglot by all measures."

You can make a generalization about most of these cases of extraordinary human mental abilities. The generalization is that there are certain rare humans who have special mental abilities in which the average ability of a human is exceeded by more than ten-fold. Specifically:

  • The ability of the best hyperpolyglots to speak languages is not merely twice as good as that of the average person, but more than ten times as good; for instead of being able to speak only 1 language, they can speak more than ten. 
  • The ability of the best mental calculation aces to do math calculations or calendar calculations without aid of electronic devices, paper, pencils or blackboards is not merely twice as good as that of the average person, but more than ten times as good. Mental calculation aces such as Jacques Inaudi and Zerah Colburn could outperform average people by a factor of 1000% or more. 
  • The ability of the best memorization marvels to memorize large bodies of text is not merely twice as good as that of the average person, but more than ten times as good. 
  • The ability of those with Highly Superior Autobiographical Memory to remember events from their past is not merely twice as good as that of the average person, but more than ten times as good. A modern experiment showed those with Highly Superior Autobiographical Memory scoring 25 times higher on a random dates test. 
  • The ability of the best ESP test subjects to perform well on tests of telepathy (subjects such as Hubert Pearce and the woman tested in the Riess ESP test) is not merely twice as good as that of the average person, but more than ten times as good. 
None of these realities is expected under the claim that the brain makes the mind. The largest human brains are only slightly larger than average-sized brains. There are no humans with exceptional abilities and brains twice as large or three times as large as average. In fact, in none of the cases discussed above is there any very substantial brain difference that can account for the huge difference in ability. For example, people with Highly Superior Autobiographical Memory do not have much bigger or faster brains; people who can memorize long books do not have bigger brains; hyperpolyglots do not have bigger brains; and mental math marvels do not have larger or faster brains. In fact, in many cases we see brain damage that corresponds to the exceptional ability. For example, Kim Peek was a memory and mental math marvel who could instantly tell the day of week of any day in his life, and who could recall each of 7000 books he had read. But the same man was born without the corpus callosum that connects the two hemispheres of the brain. 

Sunday, July 27, 2025

Hopfield Networks Do Nothing to Explain How a Human Could Remember or Recognize Anything

 Humans have astonishing capabilities for recognizing many different types of things: faces, individual words, quotations, places, musical compositions, and so forth. There is no credible neural explanation for how recognition occurs. There is no robust evidence for any neural correlate of recognition. Brains do not look or act any different when you are recognizing something. For example:

  • The year 2000 study "Dissociating State and Item Components of Recognition Memory Using fMRI" found no difference in brain signals of more than 1 part in 100, with almost all of the charted differences being only about 1 part in 500. 
  • The study "Remembrance of Odors Past: Human Olfactory Cortex in Cross-Modal Recognition Memory" found no difference in brain signals of more than 1 part in 200.
  • The study "Neural correlates of auditory recognition under full and divided attention in younger and older adults" found no difference in brain signals of more than 1 part in 500.
  • The study "Neural Correlates of True Memory, False Memory, and Deception" asked people to make a judgment of whether they recognized words, some of which they had been asked to study. The study found no difference in brain signals of more than about 1 part in 300.
  • The study "The Neural Correlates of Recollection: Hippocampal Activation Declines as Episodic Memory Fades" was one in which "participants performed a recognition task at both a short (10-min) and long (1-week) study-test delay." The study found no difference in brain signals of more than about 1 part in 300.
  • The study "The neural correlates of everyday recognition memory" found no difference in brain signals of more than about 1 part in 500.
  • The study "Neural correlates of audio‐visual object recognition: Effects of implicit spatial congruency" was one in which participants attempted a recognition task. The study found no difference in brain signals of more than about 1 part in 200.
Some have claimed that there is something in the brain called a "fusiform face area" that is more active when you are recognizing faces. Such a claim is not well-founded, for reasons I discuss in my post here. 

But some claim there is some theoretical basis for a little understanding of how a brain could recognize something. For example, the recent paper "Computational models of learning and synaptic plasticity" by neuroscientist Danil Tyulmankov is one of numerous pieces attempting to claim that computer science work provides models shedding light on how a brain might learn. Such claims are unfounded because of the vast physical differences between what is going on in brains and what goes on inside computers. 

On page 7 of his paper Danil Tyulmankov gives us a typical example of someone trying to trick us into thinking that some computer software technique has some relevance to explaining how a brain could recognize something. Under a heading of "Memory paradigms" and subheadings of "Recall" and "Associative Memory" he states this:

"The colloquial use of 'memory' commonly refers to declarative memory (also called explicit memory) – the storage of facts (semantic memory) or experiences (episodic memory) – which requires intentional conscious recall. One of the most influential models of recall is the associative memory network (Figure 1a), also known as the Hopfield network (Hopfield, 1982). The model’s objective is to store a set of items ... such that when a perturbed version ... of one of the items is presented, the network retrieves the stored item that is most similar to it. For example, given a series of images, as well as a prompt where one of the images is partially obscured, the network would be able to reconstruct the full image. More abstractly, given a series of lived experiences, this may correspond to a verbal prompt to recall a piece of semantic or autobiographical information." 

We have here the typical shenanigans of one of the persons trying to conflate human memory and computer memory, something made rather easy by the fortunate happenstance that the same word ("memory") is used for two completely different things (human memory and computer memory).  Tyulmankov has given us above a paragraph that starts out with a sentence referring to human memory; he then refers to a purely computer software method with no relevance to human memory; and he then ends the paragraph with another sentence referring only to human memory.  It's kind of like someone trying to make you get the impression that the president of the USA is a dog, by having the first sentence of his paragraph referring to dogs, having the second and third sentence of his paragraph referring to the president of the USA, and then having the last sentence of his paragraph again referring to dogs. 

Let me explain some reasons why Hopfield networks do nothing to explain how a human could remember or recognize anything. Hopfield networks are groups of nodes in which each node has a connection to each of the other nodes in the group. The diagram below illustrates a very simple Hopfield network. Each of the circles is called a node. A Hopfield network might have any number of nodes. In the Hopfield network, the different connections between the nodes might have different numerical values called "strengths." 

Hopfield network

Now, if you search on the Internet, you can find various examples of programming code that uses Hopfield networks to store and retrieve information. Sometimes while giving such examples, it is claimed that the code has some relevance to explaining how a brain could remember something. We are sometimes told that Hopfield networks have some relevance to the brain, because just as individual neurons in the brain can each be connected to many other neurons because of synaptic connections, each node in a Hopfield network is connected to every other node in the network. To play up the similarity, the nodes of a Hopfield network are sometimes called "neurons," even though such a term is profoundly misleading, for reasons I will explain below. 
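To make the discussion below concrete, here is a minimal sketch (in Python, with made-up toy patterns) of the kind of Hopfield-network code one finds in such examples. It is only a toy illustration of the software technique, and nothing in it has been shown to correspond to anything in a brain.

import numpy as np

# A minimal Hopfield-network toy: each "node" holds a state of +1 or -1, and a
# weight matrix (readable with perfect reliability) links every node to every other.

def train(patterns):
    # Standard Hebbian outer-product rule for building the weight matrix.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)            # no self-connections
    return w / len(patterns)

def recall(w, state, steps=10):
    # Repeatedly update every node from the weighted sum of all the other nodes.
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

# Two stored toy patterns of 8 nodes each.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
w = train(patterns)

# A corrupted version of the first pattern (two nodes flipped) gets "cleaned up."
probe = np.array([ 1, -1, -1, -1,  1, -1, -1, -1])
print(recall(w, probe))               # returns the first stored pattern

Notice that the retrieval step depends on reading every connection strength exactly and on sweeping through a fixed, closed set of nodes; those features matter for the reasons discussed below.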

There are, however, very strong reasons why Hopfield networks have no relevance to explaining how a brain could remember something. They are listed in my visual above. I will explain each. 

Reason #1: Neurons do not have any capacity for storing some learned piece of information such as an image, a number or a word. 

In a Hopfield network particular nodes of the network may store some item of information. But a neuron does not have any capacity that we know of for storing some item of learned information. No one has ever found an item of learned information by examining a neuron. Very much tissue has been extracted from the brains of living people, and no one ever found in a neuron something like the letter "A" or the word "cat" or the number "1776."  No one has ever found even a single number such as 0 or 1 stored within a neuron. 

Neurons also have no ability to function as binary switches similar to the light switches controlling whether a light is on or off.  A neuron fires at a varying rate, with very much variation from one minute to the next. There is nothing in a neuron that flips between a permanent "off" state and a permanent "on" state.  So even attempts to depict individual neurons as storing a value of 0 or 1 are invalid. Neurons are not like binary switches. 

The page here provides code for a Hopfield network, using the term "neuron" to describe the nodes of the network. It states, "Each neuron in the network represents a binary unit that can have a state of either +1 (active) or −1 (inactive)." That does not correspond to the physical reality of neurons over any long time scale. Over the course of a few seconds, a neuron can switch between being active and inactive. But over a time span such as days, neurons do not switch between some active state and an inactive state. All neurons are electrically active over a time span of 24 hours. So it is not accurate to imagine some situation persisting over a long time in which one neuron corresponds to a 0, and another neuron corresponds to a 1. Neurons fire at a rate between 1 time per second and 200 times per second, and such firing rates vary unpredictably. 

So as simple a storage task as the storage of the word "DOG" cannot occur through some method like that imagined above. The word "DOG" corresponds to the ASCII numbers 68, 79 and 71, and those three numbers correspond to the 7-bit binary sequence 100010010011111000111. But we can imagine no group of about 20 neurons storing the binary sequence 100010010011111000111 over a long period such as months, because there can be no situation in which some neurons are inactive over a period of months (corresponding to 0) while other neurons are active over months (corresponding to 1). All neurons are continually active, and neurons do not have any switch-like feature that could enable binary information storage. Plus there's the fact that the brain has no such thing as an ASCII chart allowing a conversion between letters of the English alphabet and decimal numbers. 
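For anyone who wants to check that arithmetic, here is a trivial sketch of the ASCII conversion just described. The point to notice is that the conversion works only because an agreed-upon encoding table exists; nothing like such a table has ever been found in a brain.

# Convert "DOG" to its ASCII codes and to a concatenated 7-bit binary string.
word = "DOG"
codes = [ord(c) for c in word]
bits = "".join(format(code, "07b") for code in codes)
print(codes)   # [68, 79, 71]
print(bits)    # 100010010011111000111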

Reason #2: Unlike Hopfield networks in computer software, the connections between neurons are noisy and unreliable

Programming code using Hopfield networks typically relies on a simple retrieval procedure in which information is extracted across the network with 100% reliability. That does not correspond to the situation in the brain. Almost all connections in the brain require signals passing across chemical synapses. But chemical synapses do not reliably transmit signals. Scientific papers say that each time a signal is transmitted across a chemical synapse, it is transmitted with a reliability of 50% or less. A paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower." 

What this means is that computer programs using a Hopfield network to retrieve information are not realistically simulating the brain. Were you to modify such programs to realistically simulate the unreliable synaptic transmission in the brain, such programs would no longer be able to achieve their functions of information retrieval or recognition.
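As a crude illustration of this point, one can take the toy sketch above (reusing its w, probe and patterns) and let every connection fail at random on each update, in line with the transmission failure rates quoted above. This is only a back-of-the-envelope sketch, not a biophysical model, but it shows how the guaranteed, deterministic retrieval such programs depend on disappears once the connections are made unreliable.

import numpy as np

def unreliable_recall(w, state, p_fail=0.5, steps=10, rng=None):
    # One retrieval run in which every connection independently drops out with
    # probability p_fail on each update step, crudely mimicking the quoted
    # ~50% transmission failure rate of chemical synapses.
    if rng is None:
        rng = np.random.default_rng()
    for _ in range(steps):
        mask = rng.random(w.shape) >= p_fail     # connections that work this step
        state = np.sign((w * mask) @ state)
        state[state == 0] = 1
    return state

# Count how often the noisy network still returns the stored pattern exactly.
rng = np.random.default_rng(0)
hits = sum(np.array_equal(unreliable_recall(w, probe, rng=rng), patterns[0])
           for _ in range(100))
print(f"exact retrievals out of 100 noisy runs: {hits}")   # no longer guaranteed to be 100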

Reason #3: A group of neurons is a "fuzzy boundary" thing that does not make a closed network that can be traversed from beginning to end

To understand this reason, let us look at how neurons are arranged in the brain. A typical neuron has very many synapses that connect it to other neurons.  It has been estimated that the brain has about 100 billion neurons, and about 100 trillion synapses. This means the average neuron has about 1000 synapses, each of which is a connection between that neuron and other neurons. All those synapses lock a neuron in place at a particular location, just as the roots of a tree in a dense forest lock that tree into a particular location in the forest.  

The visual below (from the site here) shows some neurons in the brain.  The colors are artificial, supplied to show individual neurons. 


When I search for information on the average distance between neurons, compared to the average size of a neuron, I am told that the average size of the soma at the center of a neuron is about 10-25 micrometers (millionths of a meter), and that the average distance between neurons is also about 25 micrometers. So neurons are densely packed in the brain, rather like in the artistic depiction below. 


Now, there is a great problem with any spherical volume of neurons looking like the neurons above.  The problem is that such a volume has no particular spot or neuron that is its beginning, and no particular spot or neuron that is its end. So the volume of neurons cannot be traversed from its beginning to end. For any particular neuron connected to about 1000 other neurons, there is no such thing as a "next neuron" and no such thing as a "previous neuron." 

But a traversal from a beginning to an end is a crucial part of all programming that utilizes Hopfield networks. Traversal from a beginning to an end is crucial to the very idea of a Hopfield network. A Hopfield network does not correspond to a group of neurons, which has a fuzzy boundary and is not like a closed network with a beginning and an end. 

Reason #4: Because of high levels of synaptic remodeling and the short lifetimes of synapse proteins (< 4 weeks), the strengths of connections between neurons vary rapidly and randomly.

Hopfield networks include a "weight matrix" that is touted as something similar to the connections between neurons. But in such networks this "weight matrix" is a stable thing. That does not correspond to the connections between neurons, which are ever-varying in a random way. 

Below is a quote from a scientific paper:

"A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys.. and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months."

You can read Stettler's paper here. A 2019 paper documents a 16-day examination of synapses, finding "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That's about a 33% disappearance rate over a course of 16 days, suggesting an average synapse lifetime of less than three months.
You can google for “synaptic turnover rate” for more information. Synapses typically protrude out of bump-like structures on dendrites called dendritic spines. But those spines have lifetimes of less than 2 years. Dendritic spines last no more than about a month in the hippocampus, and less than two years in the cortex. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the hippocampus have a turnover of about 40% every 4 days. This 2002 study found that a subgroup of dendritic spines in the cortex of mouse brains (the more long-lasting subgroup) have a half-life of only 120 days. A paper on dendritic spines in the neocortex says, "Spines that appear and persist are rare." While a 2009 paper tried to insinuate a link between dendritic spines and memory, its data showed how unstable dendritic spines are. Speaking of dendritic spines in the cortex, the paper found that "most daily formed spines have an average lifetime of ~1.5 days and a small fraction have an average lifetime of ~1–2 months," and told us that the fraction of dendritic spines lasting for more than a year was less than 1 percent. A 2018 paper has a graph showing a 5-day "survival fraction" of only about 30% for dendritic spines in the cortex. A 2014 paper found that only 3% of new spines in the cortex persist for more than 22 days. Speaking of dendritic spines, a 2007 paper says, "Most spines that appear in adult animals are transient, and the addition of stable spines and synapses is rare." A 2016 paper found a dendritic spine turnover rate in the neocortex of 4% every 2 days. A 2018 paper found only about 30% of new and existing dendritic spines in the cortex remaining after 16 days (Figure 4 in the paper). 
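For what it's worth, here is a small back-of-the-envelope sketch showing how the lifetime figures above follow from the quoted turnover numbers, assuming a constant loss rate (a simplification, but adequate for rough estimates).

import math

# Stettler et al.: 7% of axonal boutons turn over per week. A constant 7%-per-week
# loss rate implies a mean synapse lifetime of roughly 1/0.07 = 14 weeks, i.e. a
# little over three months, matching the quote above.
print(1 / 0.07, "weeks")

# The 16-day dataset: 163 of 483 pre-existing synapses (320 stable + 163 eliminated)
# disappeared, about 34%. With a constant loss rate, that implies a mean lifetime
# of roughly 40 days -- well under three months.
loss_fraction = 163 / (320 + 163)
rate_per_day = -math.log(1 - loss_fraction) / 16
print(1 / rate_per_day, "days")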

Furthermore, it is known that the proteins existing between the two knobs of the synapse (the very proteins involved in synapse strengthening) are very short-lived, having average lifetimes of no more than a few days. A graduate student studying memory states it like this:

"It’s long been thought that memories are maintained by the strengthening of synapses, but we know that the proteins involved in that strengthening are very unstable. They turn over on the scale of hours to, at most, a few days."

A scientific paper states the same thing:

Experience-dependent behavioral memories can last a lifetime, whereas even a long-lived protein or mRNA molecule has a half-life of around 24 hrs. Thus, the constituent molecules that subserve the maintenance of a memory will have completely turned over, i.e. have been broken down and resynthesized, over the course of about 1 week.

The paper cited above also states this (page 6):

"The mutually opposing effects of LTP and LTD further add to the eventual disappearance of the memory maintained in the form of synaptic strengths. Successive events of LTP and LTD, occurring in diverse and unrelated contexts, counteract and overwrite each other and will, as time goes by, tend to obliterate old patterns of synaptic weights, covering them with layers of new ones. Once again, we are led to the conclusion that the pattern of synaptic strengths cannot be relied upon to preserve, for instance, childhood memories."

A paper on the lifetime of synapse proteins is the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that one earlier 2010 study found that the average half-life of brain proteins was about 9 days, and that a 2013 study found that the average half-life of brain proteins was about 5 days. The study then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main types of brain proteins (such as nucleus, mitochondrion, etc.) have half-lives of 15 days or less.  The 2018 study here precisely measured the lifetimes of more than 3000 brain proteins from all over the brain, and found not a single one with a lifetime of more than 75 days (figure 2 shows the average protein lifetime was only 11 days). 
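To see what half-lives of roughly 5 to 11 days mean over the timescales of human memory, here is a small sketch assuming simple exponential turnover (again, a simplification).

# Fraction of today's synaptic protein molecules still present after a given time,
# assuming simple exponential turnover with the half-lives quoted above.
for half_life_days in (5, 11):
    for elapsed_days in (30, 365):
        remaining = 0.5 ** (elapsed_days / half_life_days)
        print(f"half-life {half_life_days} days: after {elapsed_days} days, "
              f"about {remaining:.2e} of the original molecules remain")

On those assumptions, essentially none of the protein molecules present in a synapse today will still be there a year from now, which is the point the quoted papers are making.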

The paper here states, "Experiments indicate in absence of activity average life times ranging from minutes for immature synapses to two months for mature ones with large weights."

When you think about synapses, visualize the edge of a seashore. Just as writing in the sand is a completely unstable way to store information, long-term information cannot be held in synapses. The proteins that make up the synapses are turning over very rapidly (lasting no longer than a few weeks), and the entire synapse is replaced every few months or every several months.  Conversely, humans can reliably remember things they learned or experienced 50 or 60 years ago; and humans can recognize songs, faces, names and quotes that they have not been exposed to in 50 years. For example, lying in bed the other day, there strangely popped into my mind the name "Tobie Tyler." I recognized the name as that of a circus movie involving a boy, one I had not seen or heard mentioned in well over half a century. A Google search confirmed this (I saw the movie around 1960).

Reason #5: There is no ability in the brain to read the strength of the synaptic connections between neurons. 

In a Hopfield network as implemented in computer software code, there is an ability to read the strength of all of the connections between nodes. But the brain has no corresponding ability. The brain has nothing like a synapse strength reader. 

Computer programmers take for granted certain conveniences. Every programmer knows that if he has a data structure named DS, he can run a loop something like the code below, to sum up  the numbers stored in each part of such a data structure:

// Sum the values stored in each element of the data structure DS.
int nTotal = 0;
for (int i = 0; i < DS.length; i++) {
    nTotal = nTotal + DS[i];   // read element i and add it to the running total
}

But while this type of thing is a basic convenience available in the world of programming, it does not correspond to anything possible in the brain. Physically, brains have no way to run loops performing some mathematical or summation operation on each neuron or synapse in a set of neurons or synapses.  A brain cannot sum up the strengths of a set of synapses, nor can a brain even read the exact strength of some particular synapse. Similarly, your muscular system has muscles of various strengths; but there is in your body no such thing as a muscle strength reader; and there never occurs in your body anything like a loop that sums up all the strengths of the muscles in some part of your body. 

In short, while having a small amount of superficial resemblance to the arrangement of neurons and synapses, Hopfield networks and the programming code that uses them do not realistically simulate the realities of neurons and synapses. The ability of Hopfield networks to do certain tasks does nothing to show that the brain is capable of doing such tasks. 

Below is a revealing confession by a neuroscientist named Slotine:

"While neuroscience initially inspired key ideas in AI, the last 50 years of neuroscience research have had little influence on the field, and many modern AI algorithms have drifted away from neural analogies."