Thursday, March 5, 2026

Neuroscientists Have No Brain-Based Explanation for Either Remembering or Forgetting

Decades ago I regarded Time Magazine as being the epitome of respectable journalism. I remembered there was a 48-story skyscraper in New York City that was called the Time-Life Building. I imagined the building stuffed with conscientious reporters and fact checkers, who would make sure that what you read in the weekly Time magazine was something you could trust. In the days of my youth, the average man might keep up on world events by watching his 6:30 Evening News broadcast, by reading his daily newspaper, and by reading his weekly copies of Life magazine and Time magazine. 

Now the Time-Life Building has been renamed, and is merely called 1271 Avenue of the Americas. Life magazine ceased publishing in the year 2000. Time magazine still publishes in print, but only 22 editions a year, rather than the previous weekly editions. Time magazine also has an online presence. But judging from its recent defective article on memory, we may wonder how trustworthy its science coverage is these days. 

The article (which you can read here) was a typical example of a type of article we can call a why-or-how-misspeaking article. Such an article starts out with the word "How" or "Why," and does not end with a question mark, purporting to explain something that may have puzzled you while offering only an unbelievable story line. The title of the article is "Why You Can't Remember Being a Toddler."  

Referring to the tendency of people not to remember their early years before age 4, the article states this:

"In recent years, scientists who study this phenomenon—sometimes called childhood or infantile amnesia—have made some surprising findings that illuminate how this nearly universal form of forgetting works. At the lab of Paul Frankland, a senior scientist at the Hospital for Sick Children in Toronto, researchers tagged the cells in the brain that were activated as young mice learned to fear a chamber. Three months later, when the full-grown mice had forgotten their fear, the researchers activated those cells again—and suddenly, the mice remembered."

All of these claims are false. No, scientists have not illuminated how forgetting of childhood memories works or how any type of forgetting works. There are no cells in the brain that are activated only when you learn something or form a memory. Almost every cell in the brain is continually active, and there is zero evidence that some type of cell "turns on" or activates only when something is learned or only when a memory is formed. The claim about the experimental result is unfounded, being based on a very poor study that is an example of junk science. 

The link in the quote above takes us to only the abstract of a paper entitled "Recovery of 'Lost' Infant Memories in Mice." An examination of the full paper shows it to be a poor piece of neuroscience. We have way-too-small study group sizes of only 7, 9 or 10 mice. No study like this should be taken seriously unless at least 15 to 20 mice were used for each of the study groups. In addition, the study depends on an utterly unreliable method for trying to determine how well mice remembered: the worthless technique of judging how much a mouse moved during an arbitrary time period, and then referring to that as a "freezing percentage." For reasons discussed here, all studies based on so unreliable a method are examples of junk science. 

Here we have the use of "freezing behavior" methodology in its most unreliable and untrustworthy form, with claims being made that some mice remembered better, claims based on reports they "froze" more (in other words, moved less) after some part of their brain was zapped by optogenetic stimulation. Researchers who use this method are compounding the folly of trying to measure fear or recall by judging "freezing behavior," because it is known that zapping many different parts of the brain will itself produce "freezing behavior" even if there is no difference in fear or recall. So it's like this:

What the neuroscientist said: "I zapped the mouse's brain, and he 'froze' in the sense of moving less, so I must be reactivating his forgotten fear memories which are causing him to 'freeze in fear.' "

What the neuroscientist should be saying: "I zapped the mouse's brain, and he 'froze'  in the sense of moving less, and that's probably just a response to the very action of zapping the mouse, telling me nothing about whether the mouse remembered something the mouse was trained to fear."

The title of the paper is misleading. Nothing has been done here to show "Recovery of 'Lost' Infant Memories in Mice." No decent study group sizes were used. The authors make no claim to have performed a sample size calculation to determine whether they were using adequate sample sizes. And no reliable method has been used to measure whether any of these mice remembered anything. So no evidence has been provided that any lost memories were recovered. 

Referring to the loss of infant memories, the Time article says this: "Animals whose brains tend to add smaller crops of neurons after birth—guinea pigs, for instance—do not show signs of this amnesia, Frankland and colleagues have found." This makes no sense under "brains store memories" claims. Under such ideas, you would expect that adding more neurons would tend to produce stronger memories. 

The Time article then gives us a claim based on another low-quality neuroscience paper, one using the same bad methods for judging fear, as well as a way-too-small study group size of only 8 mice. 

The Time article then gives us this make-your-head-hurt piece of silliness, stating, "However, Nick Turk-Browne at Yale University and his colleagues have managed to scan the brains of a growing number of little kids, and they’ve discovered that kids as young as a year old do appear to be forming memories, in the same way that adults create recollections of past events, called episodic memory." The reference is to a senseless child-endangering set of experiments in which small infants have their brains scanned, without any medical justification. The experiments do nothing to show that such infants are forming memories. There is nothing an MRI scan can produce that will ever show that someone is forming a memory. And the last thing we would ever need is brain scans of infants to show that they are forming memories. The ability of an infant to learn in many ways as it progresses proves its ability to form memories. 

A child once died in an MRI accident. The use of MRI scans in healthy infants by experimenting scientists is a morally troubling affair. Some studies have suggested a cancer risk from MRI scans, and no one knows whether MRI scans in infancy increase cancer likelihood over a 70-year time span.  The younger the subject, the more objectionable it is to be exposing that subject to any unnecessary MRI scans that might increase his chance of getting cancer decades later. 

We then have this quote in which a neuroscientist makes a silly statement: 

"To get a better sense of precisely when memories are formed and forgotten, Sarah Power at the Max Planck Institute for Human Development and her colleagues built a media room where children have experiences they will never encounter in the outside world. 'One of the really important things about the task is that everything only exists inside the lab space. We wanted to make sure it was completely unique in the sense that…the contextual environments don't exist anywhere outside in the real world, so that we could know that if they did remember these associations, it could only be from the fact that they had been in the lab,' she says. They have so far observed 400 toddlers between the ages of 18 and 24 months, having them form memories of the lab space, and they intend to follow them over time. The project is still in its early stages, but  'from the preliminary data, we've been very surprised at their ability to encode and retain these episodic-like memories,' she says."

But why on Earth would someone be surprised by these results? Is it not extremely obvious that every infant has to form memories to progress as an infant normally does, learning skills such as crawling and walking and the beginnings of language? The scientist Power here sounds like someone who jumps in a swimming pool, and then says that she is very surprised to have gotten wet. 

The Time article then states, "It’s a mystery why our brains, and those of other mammals, forget our early lives." It thereby confesses that the article's title ("Why You Can't Remember Being a Toddler") is bogus clickbait. We then have a quote from the scientist who is senselessly brain-scanning healthy infants in MRI scanners, without medical justification. He says, "There's tons of behavioral evidence that even newborn infants are really good at aggregating statistics." Yeah, right -- and my neighbor's dog is the King of Mars.

The article and its quotes and the papers it cites may make you shake your head and ask some question such as "What has caused these people to say and do things so silly-sounding?"

Neuroscientists have been studying brains for many decades, trying to shed some light on how memory occurs. They have got nowhere, although you might think differently from all the misleading papers they have written. No neuroscientist has ever found the slightest trace of learned information by microscopically studying brain tissue.  No neuroscientist has any credible tale to tell of how a human could form a memory allowing him to later recall complex information. No neuroscientist has any credible tale to tell of how a human could ever instantly recall lots of relevant information he learned decades ago, after merely hearing a name or seeing an image (the kind of thing that happens when a child asks a parent to tell him about some famous historical figure). No scientist has any credible tale to tell of why a 70-year-old can remember in very great detail very many things that happened in his teenage years more than 50 years ago, while a typical 20-year-old cannot remember anything that happened in his first four years. 

Sunday, March 1, 2026

Don't Think It's a Theory of Brain Memory, When It's Just Vacuous Hand-Waving

A very important skill in life is the ability to recognize vacuous hand-waving when it occurs. Vacuous hand-waving is when someone tries to make it sound like he understands something he does not understand. Such hand-waving is often characterized by empty phrases and jargon used merely to create some impression of understanding. 

To illustrate the use of vacuous hand-waving, let's consider the question: how do you store patients' information at a doctor's office? We can distinguish the kind of talk we might get from a woman named Jane (who understands very well how it is done), and a man named John (who does not understand how it is done). Jane might give an answer like this:

Jane:  "We store medical records the old-fashioned way, rather than doing everything by computers. We use a separate manila folder for each patient. The top edge of the folder has a blank slot. In that slot, you write the patient's name: last name, followed by a comma, followed by the first name. Whenever a new patient comes in for the first time, you have to take a blank manila folder, and write the person's name on the folder tab, last name first. You also have to get the patient to fill out one of our forms marked "New Patient Form." That form asks for the patient's name, phone number, email, health insurance type, health insurance number, and so forth. That "New Patient Form" must be put in the patient's folder. Once the patient has seen the doctor, the doctor puts his notes in the patient's folder. That way we can always know what happened with any particular patient. The new patient's folder is then added to our file shelf, and you have to be careful to put that folder in the correct spot, using alphabetical order. The folders are sorted in alphabetical order, by last name. But what happens if a patient comes in and says he has already visited the doctor? Then we have to retrieve his file from our file shelf. That's easy to do, because all of the files on our file shelf are kept in alphabetical order. So once we have retrieved the patient's folder on the day he has an appointment, we give that patient's folder to the doctor, so he can add new notes to the file. Later that day, we file the patient's folder back in our file shelf, being careful to put it in the correct spot, so that alphabetical order is maintained."


It is clear from this very detailed answer that Jane is aware of an exact system for storing patient data at a doctor's office, and how such a system can meet all of the requirements for storing patients' data at that office. The system involves no fancy technology, but at least it is clear from her answer that Jane knows exactly how the system works. But let's imagine a different answer from John, an example that is merely a case of vacuous hand-waving. 

John:  "So how would you store patient's records in a medical office? That would have to be done very carefully. It would cause all kinds of problems if the data for two different patients were mixed up. It is clear that such an office would involve some type of literary specification that would allow the exact details of a patient's treatment to be preserved. The real explanation for how the storage would work is: paper accumulation. As more and more patients were seen, more and more pieces of paper would accumulate. But paper cannot be very easily copied.  So an alternative would be an electronic accumulation of data, that would allow rapid digital backups." 

Nothing in John's answer indicates that he actually understands the specifics of how you could store medical records at a doctor's office. John's answer is an example of vacuous hand-waving. It sounds like he has no understanding of basic issues such as how to create a way of storing a new patient's data, how to avoid getting the records of two patients mixed up, how to add new treatment notes for a particular patient, how to easily find the data for a particular patient, and so forth. Jane's answer shows that she knows the answers to such questions, but John's answer makes us doubt that he has any understanding of such matters. 
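Jane's filing system maps naturally onto a simple data structure. Below is a minimal sketch in Python of the scheme she describes, using hypothetical names and fields; it is only an illustration of the idea, not anything taken from an actual medical office.

```python
# A minimal sketch of the filing system Jane describes: one folder per
# patient, keyed by "Last, First", each folder holding the intake form
# and the doctor's accumulated notes. All names and fields are invented.

class FileShelf:
    def __init__(self):
        self.folders = {}  # "Last, First" -> folder contents

    def new_patient(self, last, first, intake_form):
        key = f"{last}, {first}"
        if key in self.folders:
            raise ValueError(f"folder for {key} already exists")
        self.folders[key] = {"intake": intake_form, "notes": []}

    def retrieve(self, last, first):
        # Like pulling the folder straight off the alphabetized shelf:
        # we go directly to the right folder, with no searching through others.
        return self.folders[f"{last}, {first}"]

    def add_note(self, last, first, note):
        self.retrieve(last, first)["notes"].append(note)

    def shelf_order(self):
        # The folders as they would sit on the shelf, alphabetized by last name.
        return sorted(self.folders)

shelf = FileShelf()
shelf.new_patient("Smith", "Alice", {"phone": "555-0100", "insurer": "Acme"})
shelf.new_patient("Jones", "Bob", {"phone": "555-0101", "insurer": "Acme"})
shelf.add_note("Smith", "Alice", "2026-03-05: annual checkup, all normal")
print(shelf.shelf_order())
print(shelf.retrieve("Smith", "Alice")["notes"][0])
```

The point of the sketch is Jane's point: a working storage system needs an exact scheme for filing, avoiding mix-ups, and retrieval, not just vague talk of "paper accumulation."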

Now, how do neuroscientists sound when they speculate about how a brain might store or retrieve memories? Do they sound like Jane, or do they sound like John?  They always sound like John. Dictionary.com defines "hand waving" as "insubstantial words, arguments, gestures, or actions used in an attempt to explain or persuade." When neuroscientists attempt to explain memory by referring to the brain, they offer only the most hazy hand-waving. Typically what occurs is the repetition of empty slogans and catchphrases.  

For example, a neuroscientist may claim that memories are formed by "synapse strengthening." There is no substance in this claim, which is mere hand-waving. We have many examples of the storage of knowledge in human-made things such as books, drawings, computer files, messages, handwritten notes and electronic data.  Such knowledge storage never occurs through strengthening.  Instead what typically happens when knowledge is stored in books,  messages, notes and computer files is that there occurs a repetition of symbolic tokens by some kind of writing process, and the use of some encoding system in which certain combinations of symbolic tokens represent particular words, things or ideas.  That is not strengthening.  

To give another example of empty hazy hand-waving, a neuroscientist may vaguely claim that memories are formed by "the formation of synaptic patterns." There is no substance in this claim, which is mere hand-waving.  It is possible to store information by the use of pattern repetitions. For example, you might consider each word in the English language as a pixel pattern, and then say that each use of the word "dog" in a printed book is a pattern repetition. But synapses do not form any recognizable repeating patterns. And if synapses did form such patterns, there would need to exist some synapse pattern reader to read and recognize such patterns; but no such thing exists.  Instead of being anything that could consist of stable repeating patterns, synapses are unstable "shifting sands" kind of things. Synapses are built from proteins that have an average lifetime of only two weeks or less.  The maximum length of time that humans can remember things (more than 50 years) is 1000 times longer than the average lifetime of the proteins in a synapse. So synapses cannot be the storage place of memories that can last reliably for so long. 
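The arithmetic behind the ratio just mentioned is easy to check. The sketch below uses the two figures stated in the text (a roughly 2-week average protein lifetime, and a memory span of 50 years or more):

```python
# Back-of-envelope check of the ratio in the text: maximum human memory
# span (50+ years) versus the ~2-week average lifetime of synaptic proteins.
weeks_per_year = 52
memory_span_weeks = 50 * weeks_per_year      # 50 years expressed in weeks
protein_lifetime_weeks = 2
ratio = memory_span_weeks / protein_lifetime_weeks
print(ratio)  # 1300.0 -- on the order of 1,000, as stated
```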


Another example of the empty hand-waving of neuroscientists in regard to memory can be found in the paper here, entitled "Why not connectomics?" We have this example of conceptually empty hand-waving about memory storage:

"Brains can encode experiences and learned skills in a form that persists for decades or longer. The physical instantiation of such stable traces of activity is not known, but it seems likely to us that they are embodied in the same way intrinsic behaviors (such as reflexes) are: that is, in the specific pattern of connections between nerve cells. In this view, experience alters connections between nerve cells to record a memory for later recall. Both the sensory experience that lays down a memory and its later recall are indeed trains of action potentials, but in-between, and persisting for long periods, is a stable physical structural entity that holds that memory. In this sense, a map of all the things the brain has put to memory is found in the structure—the connectional map."

The first sentence is groundless dogma. There is no evidence that brains "can encode experiences and learned skills in a form that persists for decades or longer." There is merely the fact that humans can have experiences and learn skills that they remember for decades. The beginning of the second sentence is a confession that there is no understanding of how such a brain storage of memories can happen. The authors confess that "the physical instantiation of such stable traces of activity is not known." The claim that memories are stored by "the specific pattern of connections between nerve cells" is empty hand-waving, and the speculation stated is unbelievable. No one who has ever studied the connections between nerve cells (neurons) has ever seen anything like some symbolic pattern that could encode a record of human experiences or human learned skills or learned conceptual knowledge such as school learning. The brain does not have any such thing as a connection pattern reader that could read and interpret such patterns if they existed. 

Another example of utterly vacuous hand-waving by a neuroscientist can be found on the page here, where we have a neuroscientist state, "That is what learning is – forming new connections between neurons that didn’t exist before." You do not explain a storage of information by imagining new synapses forming between neurons. A forming of new synapses between neurons is a structural effect that would require many minutes or hours, but humans can learn new things instantly. If you hear from a police officer that your child or spouse or father has died, you do not have to wait for new connections to form between neurons (which would take hours). Instead you instantly form a permanent new memory of that very important fact.  

I had a medical incident this Friday, from which I have recovered. I reported to an emergency room and reported symptoms that a good physician should have been able to diagnose and treat by the simple use of a particular liquid. My case was bungled by some physician who sent some completely unsuitable medication to my pharmacist. "You're being discharged" was used as a phrase meaning "we don't know what your issue is, so get out of here." The people were all very nice, but I was reminded again how biomedical authorities may blunder.

Wednesday, February 25, 2026

The "Speed Bump" Nerve Signal Bottlenecks That Make Up 90% of Your Brain Tissue

 Scientists have long advanced the claim that the human brain is the storage place for memories and the source of human thinking. But such claims are speech customs of scientists rather than things they have proven. There are numerous reasons for doubting such claims. One big reason is that the proteins in synapses have an average lifetime of only a few weeks, which is only a thousandth of the length of time (50 years or more) that humans can store memories. Another reason is that neurons and synapses are way too noisy (and synapses too unreliable signal transmitters) to explain very accurate human memory recall, such as when a Hamlet actor flawlessly recites 1476 lines. Another general reason can be stated as follows: the human brain is too slow to account for very fast thinking and very fast memory retrieval.

Consider the question of memory retrieval. Given a prompt such as a person's name or a very short description of a person, topic or event, humans can accurately retrieve detailed information about such a topic in one or two seconds. We see this ability constantly displayed on the long-running television series Jeopardy!. On that show, contestants will be given a short prompt such as “This opera by Rossini had a disastrous premiere,” and within a second after hearing that, a contestant may click a buzzer and then a second later give an answer mentioning The Barber of Seville. Similarly, you can play with a well-educated person a game you can call “Who Was I?” You just pick random names of actual people from the arts or history, and require the person to identify the person within about two seconds. Very frequently a person will succeed. We can imagine a session of such a game, occurring in only ten seconds:

John: Marconi.
Mary: Invented the radio.
John: Magellan.
Mary: First to sail around the globe.
John: Peter Falk.
Mary: A TV actor.

We can also imagine a visual version of this game, in which you identify random pictures of any of 1000 famous people. The answers would often be just as quick.

The question is: how could a brain possibly achieve retrieval and recognition so quickly? Let us suppose that the information about some person is stored in some particular group of neurons somewhere in the brain. Finding that exact tiny storage location would be like finding a needle in a haystack, or like finding just the right index card in a swimming pool full of index cards. It would also be like opening the door of some vast library with a million volumes and instantly finding the exact volume you were looking for.

There are certain design features that a system can have that will allow for very rapid retrieval of information. One of these features is an indexing system. An indexing system requires a position notation system, in which the exact position of some piece of information can be recorded. An ordinary textbook has both of these things. The position notation system is the page numbering system. The indexing system is the index at the back of the book. But the brain has neither of these features. There is nothing in the brain like a position notation system by which the exact position of some tiny group of neurons can be identified. The brain has no neuron numbers, and a brain has no coordinate system similar to street names in a city or Cartesian coordinates in a grid. Lacking any such position notation system, the brain has no indexing system (something that requires a position notation system).
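The difference an index makes can be put concretely with a small sketch. The example below (with hypothetical entries) contrasts retrieval using a position notation plus an index, like a textbook's page numbers and back-of-book index, against retrieval by scanning every entry:

```python
# Illustration of what an indexing system buys you. With a position notation
# (here, list positions) and an index mapping topics to positions, retrieval
# jumps straight to the right entry; without one, every entry must be
# examined in turn. The entries are invented for illustration.
library = [
    ("Marconi", "pioneer of radio"),
    ("Magellan", "led first circumnavigation"),
    ("Peter Falk", "TV actor, star of Columbo"),
]

# Build the index once: topic -> position (the "page number").
index = {name: pos for pos, (name, _) in enumerate(library)}

def lookup_with_index(name):
    return library[index[name]][1]     # one jump to the exact position

def lookup_by_scanning(name):
    for entry_name, info in library:   # no index: examine entries one by one
        if entry_name == name:
            return info

print(lookup_with_index("Magellan"))
print(lookup_by_scanning("Magellan"))
```

With three entries the difference is invisible, but with millions of stored items the indexed lookup stays instant while the scan grows with the size of the collection; and the indexed scheme depends on exactly the position notation that, as argued above, the brain lacks.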

So how is it that humans are able to recall things instantly? It seems that the brain has nothing like the speed features that would make such a thing possible. You can't get around such a difficulty by claiming that each memory is stored everywhere in the brain. There would be two versions of such an idea. The first would be that each memory is entirely stored in every little spot of the brain. That makes no more sense than the idea of a library in which each page contains the information in every page of every book. The second version of the idea would be that each memory is broken up and scattered across the brain. But such an idea actually worsens the problem of explaining memory retrieval, as it would only be harder to retrieve a memory if it is scattered all over your brain rather than in a single little spot of your brain.

We also cannot get around this navigation problem by imagining that when you are asked a question, your brain scans all of its stored information. That doesn't correspond to what happens in our minds. For example, if someone asks me, "Who was Teddy Roosevelt?" my mind goes instantly to my memories of Teddy Roosevelt, and I don't experience little flashes of knowledge about countless other people, as if my brain were scanning all of its memories.  

When we consider the issue of decoding encoded information, we have an additional strong reason for thinking that the brain is way too slow to account for instantaneous recall of learned information. In order for knowledge to be stored in a brain, it would have to be encoded or translated into some type of neural state. Then, when the memory is recalled, this information would have to be decoded: it would have to be translated from some stored neural state into a thought held in the mind. This requirement is the most gigantic difficulty for any claim that brains store memories. Although they typically maintain that memories are encoded and decoded in the brain, no neuroscientist has ever specified a detailed theory of how such encoding and decoding could work. Besides the huge difficulty that such a system of encoding and decoding would require a kind of "miracle of design" we would never expect a brain to have naturally acquired (something a million times more complicated than the genetic code), there is the difficulty that the decoding would take quite a bit of time, a length of time greater than the time it takes to recall something. 

So suppose I have some memory of who George Patton was, stored in my brain as some kind of synapse or neural states, after that information had somehow been translated into synapse or neural states using some encoding scheme.  Then when someone asks, "Who was George Patton?" I would have to not only find this stored memory in my brain (like finding a needle in a haystack), but also translate these synapse or neural states back into an idea, so I could instantly answer, "The general in charge of the Third Army in World War II."  The time required for the decoding of the stored information would be an additional reason why instantaneous recall could never be happening if you were reading information stored in your brain.  The decoding of neurally stored memories would presumably require protein synthesis, but the synthesis of proteins requires minutes of time. 

There is another reason for doubting that the brain is fast enough to account for human mental activity. The reason is that the transmission of signals in a brain is way, way too slow to account for the very rapid speed of human thought and human memory retrieval.

Information travels about in a modern computer at a speed thousands of times faster than nerve signals travel in the human brain. If you type "speed of brain signals" into the Google search engine, you will see in large letters the number 286 miles per hour, which is a speed of 128 meters per second. This is one of many examples of dubious information which sometimes pops up in a large font at the top of the Google search results. The particular number in question is an estimate made by an anonymous person who quotes no sources, and one who merely claims that brain signals "can" travel at such a speed, not that such a speed is the average speed of brain signals. There is a huge difference between the average speed at which some distance will be traveled and the maximum speed at which part of that distance can be traveled (for example, while you may briefly drive at 40 miles per hour while traveling through Los Angeles, your average speed will be much, much less because of traffic lights). 
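The Los Angeles analogy can be put in rough numbers. The sketch below uses purely illustrative figures (not measurements of anything) to show how stops drag an average speed far below a briefly attained maximum:

```python
# Average speed over a trip is total distance over total time, so a brief
# fast stretch cannot rescue an average dragged down by stops.
# All figures below are illustrative assumptions.
fast_distance_m = 1000    # meters driven at the fast speed
fast_speed = 18           # m/s, about 40 miles per hour
stopped_time_s = 120      # two minutes spent waiting at traffic lights
total_time = fast_distance_m / fast_speed + stopped_time_s
average_speed = fast_distance_m / total_time
print(round(average_speed, 1))  # ~5.7 m/s, far below the 18 m/s "maximum"
```

The same logic applies to a nerve signal: quoting the top speed achievable on one stretch of the path says little about the average speed over the whole path.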

A more common figure you will often see quoted is that nerve signals can travel in the human brain at a rate of about 100 meters per second. But that is the maximum speed at which such a nerve signal can travel, when a nerve signal is traveling across what is called a myelinated axon. Below we see a diagram of a neuron. The axons are the tube-like parts in the diagram. The depicted axon is a myelinated axon (the faster type), but a large fraction of axons are unmyelinated (the slower type). 


[diagram of a neuron]

The less sophisticated diagram below makes it clear that axons make up only part of the length that brain signals must travel.

[diagram of neurons]
Below is a depiction of these components by Google's Gemini AI:

[Gemini AI depiction of neurons, axons, dendrites and synapses]

There are two types of axons: myelinated axons and non-myelinated axons (myelinated axons having a sheath-like covering shown in blue in the diagram above). According to this article, non-myelinated axons transmit nerve signals at a slower speed of only 0.5–2 meters per second (roughly one meter per second). Near the end of this article is a table of measured speeds of nerve signals traveling across axons in different animals; and in that table we see a variety of speeds ranging between 0.3 meters per second (only about a foot per second) and about 100 meters per second. 

But from the mere fact that nerve signals can travel across myelinated axons at a maximum speed of about 100 meters per second, we are not at all entitled to conclude that nerve signals typically travel from one region of the brain to another at 100 meters per second. For one thing, only about half of the axons in the human cortex are myelinated, and the transmission speed of the unmyelinated axons is only about a meter per second. Moreover,  nerve signals must also travel across dendrites and synapses, which we can see in the diagrams above. It turns out that nerve signal transmission is much slower across dendrites and synapses than across axons. To give an analogy, the axons are like a road on which you can travel fast, and the dendrites and synapses are like traffic lights or stop signs that slow down your speed.

According to neuroscientist Nikolaos C. Aggelopoulos, there is an estimate of 0.5 meters per second for the speed of nerve transmission across dendrites (see here for a similar estimate). That is a speed 200 times slower than the nerve transmission speed commonly quoted for myelinated axons. Such a speed bump seems more important when we consider a quote by UCLA neurophysicist Mayank Mehta: "Dendrites make up more than 90 percent of neural tissue." Given such a percentage, and such a conduction speed across dendrites, it would seem that the average transmission speed of a brain must be only a small fraction of the 100-meter-per-second transmission in axons. 
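Taking the quoted figures at face value, one can make a rough estimate of the effective speed over a path that is mostly dendrite. Because travel times (not speeds) add across the segments of a path, the effective speed is a distance-weighted harmonic mean, not an arithmetic average. The 90/10 split below is an illustrative assumption, not a measurement:

```python
# Rough effective speed over a signal path assumed to be 90% dendrite
# (0.5 m/s) and 10% myelinated axon (100 m/s), using the figures quoted
# above. Time over each segment adds, so the effective speed is the
# distance-weighted harmonic mean of the segment speeds.
dendrite_fraction, dendrite_speed = 0.9, 0.5    # m/s
axon_fraction, axon_speed = 0.1, 100.0          # m/s
time_per_meter = dendrite_fraction / dendrite_speed + axon_fraction / axon_speed
effective_speed = 1 / time_per_meter
print(round(effective_speed, 3))  # ~0.555 m/s -- the slow dendrites dominate
```

Note how little the fast axon segment helps: even at 100 meters per second, it contributes almost nothing to the total travel time, so the overall figure stays close to the dendritic crawl.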

A scientific paper from 2025 documents precise measurements of the speed of signal transmission across both axons and dendrites, in both humans and rats. The paper is entitled "Accelerated signal propagation speed in human neocortical dendrites" and can be read here. 

The paper gives us a speed of nerve signals (which are called action potentials) in the axons, which are the fastest-transmitting parts of a brain. Using the term AP to mean an action potential or nerve signal, the paper states, "We found no significant difference in the propagation speed of APs in the axons of rats and humans (rat: n=8, 0.848±0.291 m/s vs. human: n=9, 0.851±0.387 m/s, two-sample t-test: p=0.282, Figure 2F)." In that quote the paper gives an axon transmission speed of about 0.8 meter per second, which is more than 100 times slower than the "100 meters per second" figure commonly cited in popular literature as the speed of brain signals. 

For the speed of signal transmission across dendrites (which make up 90% or more of brain tissue), the paper gives us two numbers: one for what it calls "forward propagating sEPSP speed" and another for what it calls "back propagating AP speed." We are told that these speeds were measured:

  • "The AP propagation speed was calculated for each cell from the time difference between the somatic and dendritic APs divided by the distance between the two points. We found that the propagation speed was, on average, ~1.47 fold faster in human (rat: 0.233±0.095 m/s vs. human: 0.344±0.139 m/s, Mann-Whitney test: p=6.369 × 10–6, Figure 2F, Figure 2—figure supplement 1B)". This is a speed of about one third of a meter per second, roughly thirty centimeters per second, or about one foot per second. The "m/s" in the quote above means meters per second. 
  • "We found that sEPSP propagation speed was, on average, ~1.26 fold faster in human (rat: 0.074±0.018 m/s vs. human: 0.093±0.025 m/s, two-sample t-test: p=0.004; Figure 2D, Figure 2—figure supplement 1D)." This is a speed of about one tenth of a meter per second, roughly ten centimeters per second, or about four inches per second. 
In Table 2 of the paper we have five different rows marked with names such as Human1, Human2, Human3, Human4 and Human5. The last column in the table is marked "Velocity." All of the velocities listed are less than a tenth of a meter per second. The average of the five velocities is 0.085 meters per second. 

Dendrites, it would appear, are sluggish bottlenecks or speed bumps (by which I mean physical features that slow something down). And since it is often claimed that dendrites make up 90% or more of brain tissue, what does this tell us about whether brains are fast enough to account for instant human recall? It tells us that brains are way too slow to explain humans who can think at blazing speed, and who give the right answers instantly when asked rarely asked questions. 

Besides the slow speed of dendrites, a very important additional speed bump or bottleneck is synaptic delay: every time a nerve signal crosses the synaptic gap of a chemical synapse, there is a delay of about 0.5 millisecond. In a brain with an estimated 100 trillion synapses, synaptic delay would be an enormous slowing factor, because nerve signals would have to travel across very many synapses, resulting in a cumulative delay that might add up to many seconds. 
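To make the arithmetic concrete, here is a minimal Python sketch of cumulative synaptic delay. The 0.5 millisecond per-synapse figure is the one cited above; the numbers of synapses crossed are purely illustrative assumptions of mine, since no one knows how many synapses a recall signal would actually have to traverse.

```python
# Cumulative delay for a nerve signal crossing a chain of chemical synapses.
# The 0.5 ms per-synapse delay is the figure cited in the text; the chain
# lengths below are illustrative assumptions, not measured values.
DELAY_PER_SYNAPSE_MS = 0.5

def cumulative_delay_seconds(synapses_crossed):
    """Total synaptic delay, in seconds, for a signal crossing N synapses."""
    return synapses_crossed * DELAY_PER_SYNAPSE_MS / 1000.0

for n in (100, 1_000, 10_000):
    print(f"{n:>6} synapses -> {cumulative_delay_seconds(n):.2f} s of delay")
```

On this arithmetic, a signal path crossing a thousand synapses would accumulate half a second of delay from synaptic gaps alone, and a path crossing ten thousand synapses would accumulate five seconds.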

cumulative synaptic delay

The diagram below shows fast, slow and not-so-fast parts of the brain. The snail symbols indicate slow parts, parts that would slow down nerve signals. The thin rabbit represents a relatively fast part. The fat rabbit represents a not-so-fast part. Here "slow" and "fast" refer to the speed at which signals can travel through such parts, which do not themselves move. 

fast and slow brain parts

A brain is something packed with a gazillion speed bumps or bottlenecks: the speed bumps or bottlenecks of dendrites, and the speed bumps or bottlenecks of synapses, with their delays at every synaptic junction. The brain therefore screams to us in a loud voice: "I'm way too slow to explain your instant recall." 

Postscript: 

There are two factors we can consider that will help clarify why a dendrite signal transmission speed of only about a third or a tenth of a meter per second is way too slow to account for instant human recall. The first factor is the total area of the human cortex. The brain tissue in the human cortex is highly folded. This means that the total area of the human cortex is surprisingly large, much larger than the surface area needed to make a hat. 

If you use a Google search phrase such as "total area of the human cortex," you will be told that such a surface area is between 1.5 square feet and 2.5 square feet, "roughly the size of a standard pizza." A standard pizza is about 14 inches in diameter, about the distance from a man's elbow to a ring on his finger. 

When you type "compute the average distance between two points in a circular area" into a modern browser such as Chrome, you get an AI overview answer telling you that for a circular area with a radius of r, the average distance between two points is about .9r. Using that formula, we can (ignoring the complication of tortuosity) crudely estimate that the average distance between two random points in the cortex is about .9 times 7 inches, which is about 6 inches. 
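The ≈0.9r figure can be checked with a short Monte Carlo simulation; this is a sketch of my own, not from any of the sources cited here. (The exact value for a uniform disk is 128r/(45π) ≈ 0.905r.)

```python
import math
import random

def avg_pair_distance(radius, samples=200_000, seed=1):
    """Monte Carlo estimate of the mean distance between two points
    drawn uniformly at random from a disk of the given radius.
    The exact answer is 128*r/(45*pi), about 0.905*r."""
    rng = random.Random(seed)

    def point():
        # Rejection sampling: draw from the bounding square until
        # the point lands inside the disk.
        while True:
            x = rng.uniform(-radius, radius)
            y = rng.uniform(-radius, radius)
            if x * x + y * y <= radius * radius:
                return x, y

    total = 0.0
    for _ in range(samples):
        x1, y1 = point()
        x2, y2 = point()
        total += math.hypot(x1 - x2, y1 - y2)
    return total / samples

print(avg_pair_distance(7.0))  # roughly 6.3 for a 7-inch radius
```

For a 7-inch radius this comes out to roughly 6.3 inches, consistent with the crude estimate of about 6 inches.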

But such a number would be a significant underestimation of the average distance that a brain signal would have to travel to go from one point to another in the cortex. The reason is something called tortuosity. The word "tortuosity" refers to the fact that neural pathways are not straight lines but twisty, wiggly, squiggly lines. It takes longer for a signal to travel along such twisting lines than along a straight line. 

The Google Gemini diagram below illustrates quite well the concept of the tortuosity of brain pathways. 


brain signal tortuosity

How much does this tortuosity factor affect the length of the pathways that brain signals must travel? You can get an estimate by typing "numerical estimate of the tortuosity of brain pathways" into Google Chrome. This produces the answer that the tortuosity is estimated to be about 1.6. The scientific paper here ("Extracellular space structure revealed by diffusion analysis") says that diffusion measurements show a tortuosity of about 1.6 for brain pathways. 

Because of the tortuosity of brain signal pathways, it seems that we should multiply the previous estimate of six inches by a factor of about 1.6.  Doing that leaves you with an estimate of about 10 inches for the average distance that a brain signal would have to travel to go from one random point to another random point in the cortex.

Such a distance may seem small, but when you are dealing with a nerve signal transmission speed of about one tenth of a meter per second (about 4 inches per second), such a distance means a delay of two seconds or more. The problem is that human recall can very often occur instantly. You can ask someone his address or telephone number or the names of his family members, and he will be able to answer instantly, without this requiring a delay of two seconds. And if you ask me, "What's the New York baseball team?" I will answer instantly. There must be a thousand questions you could answer instantly, as soon as someone finished asking them. 
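The arithmetic behind that delay figure can be sketched in a few lines of Python; the 10-inch average path and the 0.1 meter-per-second dendrite speed are the estimates developed above.

```python
# Back-of-envelope recall-delay estimate, using the numbers discussed above:
# an average tortuosity-adjusted cortical path of about 10 inches, and a
# dendritic (sEPSP) propagation speed of about 0.1 meter per second.
INCHES_TO_METERS = 0.0254

avg_path_inches = 10             # estimated average cortical path length
dendrite_speed_m_per_s = 0.1     # approximate sEPSP propagation speed

delay_s = (avg_path_inches * INCHES_TO_METERS) / dendrite_speed_m_per_s
print(f"Estimated one-way signal delay: {delay_s:.2f} seconds")  # -> 2.54 seconds
```

A single one-way trip at dendrite speeds would thus take about two and a half seconds, which is the mismatch with instant recall that the paragraph above describes.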

Friday, February 20, 2026

Contrary to "Brains Make Minds" Claims, Brains Are Not Much More Electrically Active When You Are Awake

Those who claim that the brain makes the mind keep trying to push the silly idea that you are just a bunch of neural signals passing around inside your head. The scientists who make such claims typically are members of a belief community, a kind of sect of the ivory towers. When we hear such claims we are observing the speech customs of such a community. The members of belief communities often keep repeating the same old claims, which often are not justified by any robust evidence. 

dumb professor

It is interesting to consider this question: what would we expect if minds are produced by the mere firing of neurons? Three predictions would seem to follow from such an idea:

(1) If minds are produced by firing neurons, we would expect that neurons would fire much more frequently during conscious awareness than during unconscious sleep.
(2) If minds are produced by firing neurons, we would expect that neurons would fire much more frequently during heavy mental activity such as deep concentration, heavy calculation or rapid memory recall, than during a passive awake condition involving a mental resting state.
(3) If minds are produced by firing neurons, we would expect that when neurons fire most rapidly, that would produce the highest state of consciousness or mental activity. 

None of these predictions turns out to be true. To investigate this matter, you should ignore the type of visual shown below, a misleading visual that often appears in articles about the brain. The reality is that all of these types of brain waves occur during each of the listed states. 

misleading brain wave visuals
A misleading diagram recurring in neuroscience articles

Figure 1 of the paper here ("Firing rates of hippocampal neurons are preserved during subsequent sleep episodes and modified by novel awake experience") shows two scatter plots of neural firing rates in the hippocampus of rats. One scatter plot is marked "Awake" and the other is marked "Sleep." The two scatter plots look essentially identical. They both show rates of neuron firing varying from about once every ten seconds to about ten times per second. 

neural firing rates awake and sleep

Judging from such graphs, neurons do not seem to fire more often in rats when they are awake. Some people claim that neurons fire less frequently during sleep, but they typically fail to give us specific figures as to how much less frequently they fire. 

Referring to readings from an EEG (a device that reads brain waves), we read this on an expert answers site:

"In REM sleep, the EEG is remarkably similar to that of the awake state (Purves et al., 2001). Although the EEG represents the synchronized activity of many neurons in the cortex, it does give us a clue whether they are firing faster or not. Wakefulness is mainly dominated by beta and gamma waves (source: Scholarpedia), i.e. 12 - 100 Hz. REM sleep is characterized by low-amplitude mixed-frequency brain waves, quite similar to those experienced during the waking state - theta waves, alpha waves and even the high frequency beta waves more typical of high-level active concentration and thinking, i.e. 4-30 Hz (table 1) (source: Sleep)."

When I ask Google "how much do average neuron firing rates vary between sleep and wakefulness," I get the AI overview answer below:

"Average neuron firing rates decrease by approximately 30-40% during non-REM (NREM) sleep compared to wakefulness, largely driven by the appearance of 'OFF' periods (silence)...Similar to active wakefulness, neural firing rates in REM are generally higher than in NREM and often match awake levels."

This is an indication of only a small difference in neuron firing rates between sleep and wakefulness. We are told that during one type of sleep (NREM sleep) firing rates decrease by 30%, but that is a gradual decrease occurring over an hour or two. A gradual decline to a firing rate 30% lower works out to an average firing rate only about 15% below the rate during wakefulness. And during the other type of sleep (REM sleep), neurons seem to fire about as often as when you are awake. 
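The "about 15%" figure follows from a simple time-average. Assuming (purely for illustration, since the sources do not specify the shape of the decline) that the firing rate falls roughly linearly from baseline down to 30% below baseline over the NREM period, the average reduction is the midpoint:

```python
# Time-average of a firing-rate reduction that grows linearly from 0%
# (sleep onset) to 30% (deepest NREM). The linear-decline shape is an
# assumption made purely for illustration.
steps = 101
reductions = [30.0 * i / (steps - 1) for i in range(steps)]  # 0% .. 30%
average_reduction = sum(reductions) / steps
print(f"Average reduction: {average_reduction:.1f}%")  # -> 15.0%
```

The midpoint of a linear ramp from 0% to 30% is 15%, which is why a gradual decline to a 30% lower rate implies an average rate only about 15% lower.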

Diving into this AI overview by doing more scrolling or clicking, I find the story changes. Later in the same AI overview I am told that "average neuron firing rates vary significantly between sleep and wakefulness, typically characterized by a 10%–20% decrease during sleep, though these shifts depend heavily on the specific brain region and sleep stage." 

The type of graph that gives you the most information when analyzing brain waves is a type of graph called the EEG multitaper spectrogram. Someone unfamiliar with it may have to take a minute or two studying how the graph works before he can understand it. The graph can show up to 10 hours of brain activity. Each column of pixels shows the activity for a particular short time unit such as a minute or a few minutes. The higher rows on the graph represent the higher-frequency brain waves. A red color represents a high intensity; a yellow or green color represents a medium intensity; and a blue color represents a lower intensity. 

We are sometimes shown versions of this graph which suggest that lower-frequency brain waves are much more common during sleep. However, in Figure 7 of the paper here, we are shown multitaper EEG spectrograms described as representative of sleep, and those diagrams seem to depict theta, alpha and beta waves occurring almost as frequently as delta waves. 

The paper "Sleep Neurophysiological Dynamics Through the Lens of Multitaper Spectral Analysis" by Prerau et al. seems like the best paper I can find giving data comparing brain waves during sleep with brain waves when awake. The paper has many examples of EEG multitaper spectrograms plotting the differences between brain waves during sleep and brain waves when awake. The graphs show no strong evidence of greater electrical activity in the brain while you are awake. 

Here is figure 1 of the paper, showing electrical activity in a brain from midnight to 10:00 AM. Roughly the first hour and the last hour are wakefulness. 

brain wave differences between sleep and awake

I can give some tips on interpreting this graph:

(1) The graph plots brain waves that occurred over about 10 hours that included 8 hours of sleep and 2 hours of being awake. 
(2) The left edge of the graph plots brain waves occurring during an hour of being awake, before the 8 hours of sleep occurred. 
(3) The right edge of the graph plots brain waves occurring during an hour of being awake, after the 8 hours of sleep occurred. 
(4) The middle 80% of the graph plots brain waves occurring during sleep. 
(5) With this type of graph, the redder the color and the higher up on the graph the color occurs, the greater the indication that more electrical activity was occurring. For any given frequency, blue represents the lowest power; green represents a higher power than blue; yellow represents a higher power than green; and red represents a higher power than yellow. 

So what do we see in the graph above? Overall, there is little difference between the amount of electrical activity occurring during sleep and while being awake. According to the graph, while awake there is a slightly greater power in the higher frequency band (about 15 Hz), because the top left corner and the top right corner are a little more green than blue.  But according to the same graph while the person is awake there is slightly less power in the lower frequency band (about 7 Hz), because in the bottom left and the bottom right of the graph we see more green and yellow than red. Overall, we seem to see no evidence of much greater electrical activity in the brain when a person is awake, compared to when he is asleep. 

What we see here is evidence suggesting that brains are not much more electrically active when you are awake as opposed to when you are asleep. This isn't what we would expect under the dogma that the brain is the source of the mind. 

Another interesting comparison to make is to compare brain waves during wakefulness and brain waves during anesthesia. The average person might think that the firing rate of neurons slows down greatly during the unconsciousness produced by anesthesia. That is not true, however. 

The diagram below is from the paper "Electroencephalographic dynamics of etomidate‐induced loss of consciousness" that you can read here. At the bottom we see an EEG multitaper spectrogram showing brain waves during both a conscious awake state and unconsciousness produced by anesthesia. The first third of the bottom visual shows the awake and conscious state. According to the diagram, loss of consciousness (LOC) occurs at around the 300 second mark, about the middle of the colored visual. 

brain waves awake state versus anesthesia

There is no reduction in brain activity or brain firing rates documented by the diagram. In fact, the caption of the paper says, "Compared with those during the awake period, the powers of the slow wave (< 1.0 Hz), delta wave (1.0–4.0 Hz), theta wave (4.0–8.0 Hz), and alpha wave (8.0–13.0 Hz) during the etomidate-induced LOC [loss of consciousness] were significantly increased (C: 0–22.97 Hz, 27.28–40.00 Hz; p < 0.001, two-group test for spectra)." Notice that at the point marked LOC in the figure (which stands for Loss of Consciousness), we see no real change in the brain activity. Injecting the anesthetic produces a fairly small change, but at the time when the consciousness is lost, brain activity does not change. 

This is just as we would expect under the idea that your brain is not the source of your mind. 

Monday, February 16, 2026

When Neuroscientists Say "Encode," Suspect It's a Load (of BS)

Neuroscientists have no credible story to tell of how a brain could learn anything. Nothing in a brain has the slightest resemblance to a device for storing or retrieving memories or learned information. Humans create various types of objects that store information and allow the retrieval of such information, such as:

(1) a notepad and a pencil;

(2) an old-fashioned cassette tape recorder;

(3) a computer with a hard drive;

(4) printed books; 

(5) a smartphone or digital pad device capable of storing keystrokes. 

So we know the types of things that an object needs to have in order for it to be capable of physically storing learned information long-term, and also capable of rapidly retrieving that information. These include things like the following (a particular system does not necessarily need all of them):

(1) The use of some type of system of encoding whereby learned information can be translated into tokens that can be written on a surface. 

(2) Some type of component capable of writing such tokens to some kind of storage surface (for example, a pencil or the spray unit of an inkjet printer or the read-write head of a hard drive). 

(3) Some surface capable of permanently storing information written to it (for example, paper or the magnetic surface in a hard drive). 

(4) Some material arrangement allowing a sequential retrieval of learned information (for example, the binding of a notebook and the lines on its pages which facilitate sequential retrieval of the stored information, or the physical arrangement in a hard drive that allows data to be retrieved sequentially). 

(5) The use of addresses and indexes that allow an instantaneous retrieval of information. 

(6) Some type of device for retrieving information stored in an encoded format, and converting it to an intelligible form (for example, some computer technology capable of reading magnetic bits, and converting that to readable characters shown on a screen).  

(7) Conversion tables or conversion protocols such as the ASCII code, which constitute a standard method for converting letters into numbers.

(8) Computer software subroutines or functions capable of doing things such as converting text into ASCII decimal numbers, and then converting such decimal numbers into a sequence of binary digits. 
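Item (8) is easy to illustrate. The short Python sketch below performs exactly the two conversions described: text into ASCII decimal numbers, and those decimal numbers into sequences of binary digits.

```python
# Convert text to ASCII decimal codes, then to 8-bit binary strings,
# the way a computer storage system encodes characters.
text = "CAT"
decimals = [ord(ch) for ch in text]              # ASCII codes
binaries = [format(d, "08b") for d in decimals]  # 8-bit binary strings

print(decimals)   # [67, 65, 84]
print(binaries)   # ['01000011', '01000001', '01010100']
```

Every step here depends on an agreed-upon conversion table (ASCII) and on machinery built to apply it, which is the point of items (7) and (8) above.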

The brain has nothing like any of these things.  So neuroscientists have no credible story to tell of how a brain could learn anything or recall anything. But this does not stop neuroscientists from engaging in BS bluffing, and trying to make it look like they have a little bit of understanding of something they have no understanding of at all. 

The latest example of such BS bluffing is a news article on the often-erring MedicalXPress site, where we often find unfounded clickbait headlines claiming grand results that do not correspond to anything actually done. The article has the very misleading headline "How the brain learns and applies rules: Sequential neuronal dynamics in the prefrontal cortex." We get a quote from a scientist who makes a bunch of unfounded claims not matching anything established by his research paper. 

Unfounded boast of scientist

The scientist's paper is a poorly-designed piece of low-quality research entitled "The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics." It's a study involving mice. The first question you should always ask when examining a study like this is: how large were the study group sizes (in other words, how many mice were used for each of the study groups)? Normally it is easy to find that out. You can search the scientific paper for the phrases "n=" or "n =", which will usually tell you how many mice were used. Or you can search for the word "mice," and you will typically find a nice clear phrase such as "10 mice," telling you how many mice were used for a particular part of an experiment. 

Violating rules of good scientific procedure, the paper fails to ever tell us the number of mice that were used. We have in the paper some 70+ uses of the word "mice," none of which comes with a number stating how many mice were used. Searching for the phrases "n=" or "n =" does not reveal how many mice were used either. 

We can rather safely assume that the number of mice used in each study group was some ridiculously too-small number such as only 6 mice per study group. When neuroscientists use halfway-decent study group sizes, they almost always mention the number of mice used. When neuroscientists use ridiculously inadequate study group sizes, they may be too ashamed to state how small the number of mice was. 

Going to efforts no reader should have to make to get information that any good scientific paper would plainly state, you can find a statement that allows you to deduce with high likelihood how many mice were used. Wading through the senselessly convoluted mathematics that clutters up this paper, you can find the statement here: "Decoding was conducted for each mouse (mouse IDs 1 to 6) and on specific days (days 1, 2, and 6)." So it seems only six mice were used. 

The study is therefore a glaring example of very low-quality research. No study like this should be taken seriously unless it used at least 15 or 20 subjects per study group. The study also makes no mention of any control subjects, and no mention of any blinding protocol. 

The paper is guilty of ridiculous analytic techniques. We have a long gobbledygook discussion of arbitrary, convoluted "maze within a maze within a maze" mathematics that was probably invented after the data was gathered, so that some claim could be made that evidence of an encoding of learning had been found. The screen shot below shows only a very small fraction of the murky "down the rabbit hole" labyrinthine rigmarole that was going on:

An interesting fact is that if you are allowed to engage in unbridled speculation of very high complexity after gathering data, and if you have only a very small study group size, then almost any data can be claimed as evidence of secret encoding. For example:

  • Let us suppose you have data on the exact random locations of the facial pimples of six teenage girls with bad cases of acne.
  • Let us suppose you are trying to support some claim that these pimples are encodings of some data related to the girls (maybe encodings of their names, or their brothers' names, or the names of their cats, or any number of possible things). 
  • Let us suppose that you are allowed to speculate as much as you want about encoding methods, coming up with any cockamamie scheme of encoding you can imagine, using mathematics as complicated as you wish.

Then, given sufficient labors and sufficient iterations, you will be able to come up with some speculative scheme of encoding that seems to match up the random pimples on the girls' faces with whatever type of data item you have chosen. An important point is that, being pure nonsense, the superficial evidence you have provided for this "scheme of encoding" will break down when a much larger data set is used. You might have weak, superficial evidence for some "system of encoding" using a data set of only six girls with pimples; but things will break down and your claimed evidence will dissolve when you use a larger data set such as 12 girls with pimples or 20 girls with pimples. 

This is why people producing these kinds of BS studies like to use very small study group sizes (such as only 6 mice). The smaller the study group size, the easier it is to produce false alarms, and the easier it is to create "see whatever you hope to see" pareidolia. 
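The small-sample point can be demonstrated with a toy simulation of my own (not from any paper discussed here): assign purely random labels to data points, search for the best-fitting simple "decoding rule," and watch how the apparent fit shrinks as the data set grows.

```python
import random

def mean_best_fit(n_points, reps=400, seed=0):
    # For each repetition: assign random 0/1 labels to n_points items,
    # then search every single-threshold rule (in both polarities) for
    # the one that best "decodes" the labels, and record its accuracy.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        ys = [rng.randint(0, 1) for _ in range(n_points)]
        best = 0
        for k in range(n_points + 1):
            # rule: predict 0 for the first k items, 1 for the rest
            correct = ys[:k].count(0) + ys[k:].count(1)
            best = max(best, correct, n_points - correct)  # either polarity
        total += best / n_points
    return total / reps

small_group = mean_best_fit(6)    # a 6-subject "study"
large_group = mean_best_fit(24)   # the same analysis with 24 subjects
print(f"Best apparent fit, n=6:  {small_group:.2f}")
print(f"Best apparent fit, n=24: {large_group:.2f}")
```

Even though the labels contain no information at all, the best-fitting rule looks far more impressive with six subjects than with twenty-four. The apparent structure is an artifact of searching many rules over very little data, which is the danger of 6-mice study groups described above.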

neuroscientist on pedestal
The building blocks of his pedestal

An almost equally bad study claiming something about neural codes is the study "Specialized structure of neural population codes in parietal cortex outputs." It groundlessly claims evidence that "Cortical neurons comprising an output pathway form a population code with a unique correlation structure that enhances population-level information to guide accurate behavior." The claim is groundless because the study used only ten mice. And we mostly cannot tell how large the study group sizes were for each particular part of the study, because the paper often refers vaguely to "mice" without telling us how many mice were used for a particular part of the experiment. We are told that there were "data exclusions" under a rule of "we used mice with greater than 70% behavioral performance." So we cannot tell whether the study group sizes were in some cases much smaller than 10 mice. A "Reporting Summary" checklist at the end of the paper has a checkmark next to the box "The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement." But given the situation described above, the box should not have been checked. 

We have the same type of "maze within a maze within a maze" mathematics used to try to whip up some evidence of a secret code that can only be tortured out of the data. The screen shot below shows a bit of the math gobbledygook:


A look at the programming code suggests a witch's brew of poorly documented code (for example, the files here and here), running many a strange and arbitrary doubly-nested loop to produce some obscure manipulations of the original data.  It all smells very much like a "keep torturing the data until it confesses" affair. 

keep torturing the data until it confesses

The study confesses that "No statistical method was used to predetermine the sample size," a confession also made by the previously discussed study ("The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics"). 

None of the papers I have discussed in this post provides any robust evidence for the discovery of codes used in a brain to transmit or store information. 

Below is a depiction of the system of representations used in the genetic code, by which particular triplets of DNA bases represent particular amino acids used to make proteins. This is the only scheme of representation that has ever been discovered in the human brain, and it is merely a system for representing low-level chemical information. The evidence for the real existence of this code is rock-solid. Now and then there will appear boasts by neuroscientists of discovering some other system of representation in the human brain, but no such boasts have been well-founded or well-replicated, and none of them hold up well to critical scrutiny.