Wednesday, February 25, 2026

The "Speed Bump" Nerve Signal Bottlenecks That Make Up 90% of Your Brain Tissue

 Scientists have long advanced the claim that the human brain is the storage place for memories and the source of human thinking. But such claims are speech customs of scientists rather than things they have proven. There are numerous reasons for doubting such claims. One big reason is that the proteins in synapses have an average lifetime of only a few weeks, which is only a thousandth of the length of time (50 years or more) that humans can store memories. Another reason is that neurons and synapses are way too noisy (and synapses too unreliable signal transmitters) to explain very accurate human memory recall, such as when a Hamlet actor flawlessly recites 1476 lines. Another general reason can be stated as follows: the human brain is too slow to account for very fast thinking and very fast memory retrieval.

Consider the question of memory retrieval. Given a prompt such as a person's name or a very short description of a person, topic or event, humans can accurately retrieve detailed information about such a topic in one or two seconds. We see this ability constantly displayed on the long-running television series Jeopardy. On that show, contestants will be given a short prompt such as “This opera by Rossini had a disastrous premiere,” and within a second after hearing that, a contestant may click a buzzer and then a second later give an answer mentioning The Barber of Seville. Similarly, you can play a game with a well-educated person, a game you can call “Who Was I?” You just pick random names of actual people from the arts or history, and require the person to identify each one within about two seconds. Very frequently the person will succeed. We can imagine a session of such a game, occurring in only ten seconds:

John: Marconi.
Mary: Invented the radio.
John: Magellan.
Mary: First to sail around the globe.
John: Peter Falk.
Mary: A TV actor.

We can also imagine a visual version of this game, in which you identify random pictures of any of 1000 famous people. The answers would often be just as quick.

The question is: how could a brain possibly achieve retrieval and recognition so quickly? Let us suppose that the information about some person is stored in some particular group of neurons somewhere in the brain. Finding that exact tiny storage location would be like finding a needle in a haystack, or like finding just the right index card in a swimming pool full of index cards. It would also be like opening the door of some vast library with a million volumes and instantly finding the exact volume you were looking for.

There are certain design features that a system can have that will allow for very rapid retrieval of information. One of these features is an indexing system. An indexing system requires a position notation system, in which the exact position of some piece of information can be recorded. An ordinary textbook has both of these things. The position notation system is the page numbering system. The indexing system is the index at the back of the book. But the brain has neither of these features. There is nothing in the brain like a position notation system by which the exact position of some tiny group of neurons can be identified. The brain has no neuron numbers, and a brain has no coordinate system similar to street names in a city or Cartesian coordinates in a grid. Lacking any such position notation system, the brain has no indexing system (something that requires a position notation system).

So how is it that humans are able to recall things instantly? It seems that the brain has nothing like the speed features that would make such a thing possible. You can't get around such a difficulty by claiming that each memory is stored everywhere in the brain. There would be two versions of such an idea. The first would be that each memory is entirely stored in every little spot of the brain. That makes no more sense than the idea of a library in which each page contains the information in every page of every book. The second version of the idea would be that each memory is broken up and scattered across the brain. But such an idea actually worsens the problem of explaining memory retrieval, as it would only be harder to retrieve a memory if it is scattered all over your brain rather than in a single little spot of your brain.

We also cannot get around this navigation problem by imagining that when you are asked a question, your brain scans all of its stored information. That doesn't correspond to what happens in our minds. For example, if someone asks me, "Who was Teddy Roosevelt," my mind goes instantly to my memories of Teddy Roosevelt, and I don't experience little flashes of knowledge about countless other people, as if my brain were scanning all of its memories.  

When we consider the issue of decoding encoded information, we have an additional strong reason for thinking that the brain is way too slow to account for instantaneous recall of learned information.  In order for knowledge to be stored in a brain, it would have to be encoded or translated into some type of neural state. Then, when the memory is recalled, this information would have to be decoded: it would have to be translated from some stored neural state into a thought held in the mind. This requirement is the most gigantic difficulty for any claim that brains store memories. Although they typically maintain that memories are encoded and decoded in the brain, no neuroscientist has ever specified a detailed theory of how such encoding and decoding could work. Besides the huge difficulty that such a system of encoding and decoding would require a kind of "miracle of design" we would never expect for a brain to ever have naturally acquired (something a million times more complicated than the genetic code), there is the difficulty that the decoding would take quite a bit of time, a length of time greater than the time it takes to recall something. 

So suppose I have some memory of who George Patton was, stored in my brain as some kind of synapse or neural states, after that information had somehow been translated into synapse or neural states using some encoding scheme.  Then when someone asks, "Who was George Patton?" I would have to not only find this stored memory in my brain (like finding a needle in a haystack), but also translate these synapse or neural states back into an idea, so I could instantly answer, "The general in charge of the Third Army in World War II."  The time required for the decoding of the stored information would be an additional reason why instantaneous recall could never be happening if you were reading information stored in your brain.  The decoding of neurally stored memories would presumably require protein synthesis, but the synthesis of proteins requires minutes of time. 

There is another reason for doubting that the brain is fast enough to account for human mental activity. The reason is that the transmission of signals in a brain is way, way too slow to account for the very rapid speed of human thought and human memory retrieval.

Information travels about in a modern computer at a speed thousands of times faster than nerve signals travel in the human brain. If you type "speed of brain signals" into the Google search engine, you will see in large letters the number 286 miles per hour, which is a speed of 128 meters per second. This is one of many examples of dubious information that sometimes pops up in a large font at the top of the Google search results. The particular number in question is an estimate made by an anonymous person who quotes no sources, and one who merely claims that brain signals "can" travel at such a speed, not that such a speed is the average speed of brain signals. There is a huge difference between the average speed at which some distance will be traveled and the maximum speed at which part of that distance can be traveled (for example, while you may briefly drive at 40 miles per hour while traveling through Los Angeles, your average speed will be much, much less because of traffic lights).

A more common figure you will often see quoted is that nerve signals can travel in the human brain at a rate of about 100 meters per second. But that is the maximum speed at which such a nerve signal can travel, when a nerve signal is traveling across what is called a myelinated axon. Below we see a diagram of a neuron. The axons are the tube-like parts in the diagram below. The depicted axon is a myelinated axon (the faster type); but a large fraction of axons are unmyelinated (the slower type). 


neuron

The less sophisticated diagram below makes it clear that axons make up only part of the length that brain signals must travel.

neurons
Below is a depiction of these components by Google's Gemini AI:

neurons, axons, dendrites and synapses

There are two types of axons: myelinated axons and non-myelinated axons (myelinated axons having a sheath-like covering shown in blue in the diagram above). According to this article, non-myelinated axons transmit nerve signals at a slower speed of only 0.5–2 meters per second (roughly one meter per second). Near the end of this article is a table of measured speeds of nerve signals traveling across axons in different animals; and in that table we see a variety of speeds varying between 0.3 meters per second (only about a foot per second) and about 100 meters per second.

But from the mere fact that nerve signals can travel across myelinated axons at a maximum speed of about 100 meters per second, we are not at all entitled to conclude that nerve signals typically travel from one region of the brain to another at 100 meters per second. For one thing, only about half of the axons in the human cortex are myelinated, and the transmission speed of the unmyelinated axons is only about a meter per second. Moreover,  nerve signals must also travel across dendrites and synapses, which we can see in the diagrams above. It turns out that nerve signal transmission is much slower across dendrites and synapses than across axons. To give an analogy, the axons are like a road on which you can travel fast, and the dendrites and synapses are like traffic lights or stop signs that slow down your speed.

According to neuroscientist Nikolaos C. Aggelopoulos, there is an estimate of 0.5 meters per second for the speed of nerve transmission across dendrites (see here for a similar estimate). That is a speed 200 times slower than the nerve transmission speed commonly quoted for myelinated axons. Such a speed bump seems more important when we consider a quote by UCLA neurophysicist Mayank Mehta: "Dendrites make up more than 90 percent of neural tissue." Given such a percentage, and such a conduction speed across dendrites, it would seem that the average transmission speed of a brain must be only a small fraction of the 100-meter-per-second transmission in axons.
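To see roughly why, here is a hedged back-of-the-envelope sketch (my own illustration, not a measurement). It assumes, as a crude simplification, that the fraction of a signal's path length spent in dendrites matches the quoted 90% tissue figure; under that assumption the effective end-to-end speed is a harmonic mean dominated by the slow segments:

```python
# Hypothetical illustration: effective end-to-end speed of a path that mixes
# fast myelinated-axon stretches with slow dendritic stretches.
# Assumption (mine, not measured): 90% of the PATH LENGTH is dendritic,
# mirroring the quoted 90%-of-tissue figure -- a crude simplification.

def effective_speed(segments):
    """Harmonic-mean speed for a unit-length path split into
    (fraction_of_length, speed_m_per_s) segments: distance / total time."""
    total_time = sum(f / v for f, v in segments)
    return 1.0 / total_time

# 90% of the path at the ~0.5 m/s dendritic estimate,
# 10% at the ~100 m/s myelinated-axon maximum.
v = effective_speed([(0.9, 0.5), (0.1, 100.0)])
print(f"{v:.3f} m/s")  # -> 0.555 m/s, dominated by the slow segments
```

Even with the fast-axon figure left at its maximum, the blended speed under this assumption comes out well under one meter per second.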

A scientific paper from 2025 documents precise measurements of the speed of signal transmission across both axons and dendrites, in both humans and rats. The paper is entitled "Accelerated signal propagation speed in human neocortical dendrites" and can be read here.

The paper gives us a speed of nerve signals (which are called action potentials) in the axons, which are the fastest parts of a brain. Using the term AP to mean an action potential or nerve signal, the paper states, "We found no significant difference in the propagation speed of APs in the axons of rats and humans (rat: n=8, 0.848±0.291 m/s vs. human: n=9, 0.851±0.387 m/s, two-sample t-test: p=0.282, Figure 2F)." In that quote the paper gives an axon transmission speed of about 0.85 meters per second, which is more than 100 times slower than the "100 meters per second" figure commonly cited in popular literature as the speed of brain signals.

For the speed of signal transmission across dendrites (which make up 90% or more of brain tissue), the paper gives us two numbers, one for what it calls "forward propagating sEPSP speed" and another it calls "back propagating AP speed." We are told that these speeds were measured:

  • "The AP propagation speed was calculated for each cell from the time difference between the somatic and dendritic APs divided by the distance between the two points. We found that the propagation speed was, on average, ~1.47 fold faster in human (rat: 0.233±0.095 m/s vs. human: 0.344±0.139 m/s, Mann-Whitney test: p=6.369 × 10–6, Figure 2F, Figure 2—figure supplement 1B)". This is a speed of about one third of a meter per second, roughly thirty centimeters per second, or about one foot per second. The "m/s" in the quote above means meters per second. 
  • "We found that sEPSP propagation speed was, on average, ~1.26 fold faster in human (rat: 0.074±0.018 m/s vs. human: 0.093±0.025 m/s, two-sample t-test: p=0.004; Figure 2D, Figure 2—figure supplement 1D)." This is a speed of about one tenth of a meter per second, roughly ten centimeters per second, or about four inches per second. 
In Table 2 of the paper we have five rows labeled Human1 through Human5. The last column in the table is marked "Velocity." All of the velocities listed are less than a tenth of a meter per second. The average of the five velocities is 0.085 meters per second. 

Dendrites, it would appear, are sluggish bottlenecks or speed bumps (by which I mean a physical feature that slows something down). And since it is often claimed that dendrites make up 90% or more of brain tissue, what does this tell us about whether brains are fast enough to account for instant human recall? It tells us that brains are way too slow to explain humans who can think at blazing-fast speeds, and instantly give the right answers even to rarely asked questions. 

Besides the slow speed of dendrites, a very important additional speed bump or bottleneck is that of synaptic delay. Synaptic delay is the fact that every time a nerve signal crosses the synaptic gap of a chemical synapse, there is a delay of about 0.5 milliseconds. In a brain with an estimated 100 trillion synapses, synaptic delay would be an enormous slowing factor. This is because nerve signals would have to travel across very many synapses, resulting in a cumulative delay that might add up to many seconds. 
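The arithmetic of cumulative delay is simple. As a hedged illustration, the per-synapse delay below is the ~0.5 millisecond figure from the text, while the chain lengths are hypothetical round numbers, since no one knows how many synapses in series a given recall signal would cross:

```python
# Back-of-the-envelope: cumulative delay along a chain of chemical synapses.
# The per-synapse delay is the ~0.5 ms figure quoted in the text;
# the chain lengths are hypothetical illustrations, not measurements.

SYNAPTIC_DELAY_S = 0.0005  # ~0.5 millisecond per chemical synapse

for n_synapses in (100, 2_000, 10_000):
    delay = n_synapses * SYNAPTIC_DELAY_S
    print(f"{n_synapses:>6} synapses in series -> {delay:.2f} s of cumulative delay")
```

A chain of only a couple of thousand synapses in series would already cost a full second, before any conduction time across axons and dendrites is counted.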

cumulative synaptic delay

The diagram below shows fast, slow and not-so-fast parts of the brain. The snail symbols indicate slow parts, parts that would slow down nerve signals. The thin rabbit represents a relatively fast part. The fat rabbit represents a not-so-fast part. Here "slow" and "fast" refer to the speed at which signals could transmit through such parts, which do not themselves move. 

fast and slow brain parts

A brain is something packed with a gazillion speed bumps or bottlenecks: the speed bumps or bottlenecks of dendrites, and the speed bumps or bottlenecks of synapses, with their delays at every synaptic junction. The brain therefore screams to us in a loud voice: "I'm way too slow to explain your instant recall." 

Postscript: 

There are two factors we can consider that will help clarify why a dendrite signal transmission speed of only about a third or a tenth of a meter per second is way too slow to account for instant human recall. The first factor is the total area of the human cortex. The brain tissue in the human cortex is highly folded. This means that the total area of the human cortex is surprisingly large, much larger than the surface area needed to make a hat. 

If you use a Google search phrase such as "total area of the human cortex," you will be told that such a surface area is between 1.5 square feet and 2.5 square feet, "roughly the size of a standard pizza." A standard pizza is about 14 inches in diameter, about the distance from a man's elbow to a ring on his finger. 

When you type "compute the average distance between two points in a circular area" into a modern browser such as Chrome, you get an AI overview answer telling you that for a circular area with a radius of r, the average distance between two points is about 0.9r. Using that formula, we can (ignoring the complication of tortuosity) crudely estimate that the average distance between two random points in the cortex is about 0.9 times 7 inches, which is about 6 inches. 
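The 0.9r figure is easy to check numerically; the exact mathematical constant is 128/(45π) ≈ 0.905. A small Monte Carlo sketch (using the 7-inch pizza radius from above):

```python
# Monte Carlo check of the "average distance ~ 0.9r between two random
# points in a disk" figure. The exact constant is 128/(45*pi) ~ 0.9054.
import math
import random

def mean_pair_distance(r, n=100_000):
    """Estimate the mean distance between two uniformly random
    points in a disk of radius r, by random sampling."""
    def point():
        while True:  # rejection sampling for a uniform point in the disk
            x, y = random.uniform(-r, r), random.uniform(-r, r)
            if x * x + y * y <= r * r:
                return x, y
    total = 0.0
    for _ in range(n):
        (x1, y1), (x2, y2) = point(), point()
        total += math.hypot(x1 - x2, y1 - y2)
    return total / n

r = 7.0  # inches: the radius of the "standard pizza" cortex area above
print(mean_pair_distance(r))        # roughly 6.3 inches
print(128 / (45 * math.pi) * r)     # exact value: about 6.34 inches
```

So the crude "about 6 inches" estimate in the text is, if anything, slightly generous to the brain.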

But such a number would be a significant underestimation of the average distance that a brain signal would have to travel to go from one point to another in the cortex. The reason why it would be an underestimation is a reason called tortuosity. The word "tortuosity" refers to the fact that neural pathways are not straight lines but twisty, wiggly, squiggly lines. And it takes longer for signals to travel along such twisting lines than it does for a signal to travel along a straight line. 

The Google Gemini diagram below illustrates quite well the concept of the tortuosity of brain pathways. 


brain signal tortuosity

How much does this tortuosity factor affect the length of the pathways that brain signals must travel? You can get an estimate by typing "numerical estimate of the tortuosity of brain pathways" into Google Chrome.  This produces the answer that this tortuosity is estimated to be about 1.6.  The scientific paper here ("Extracellular space structure revealed by diffusion analysis") says that diffusion measurements show that 1.6 is the tortuosity of brain signal pathways. 

Because of the tortuosity of brain signal pathways, it seems that we should multiply the previous estimate of six inches by a factor of about 1.6.  Doing that leaves you with an estimate of about 10 inches for the average distance that a brain signal would have to travel to go from one random point to another random point in the cortex.

Such a distance may seem small, but when you are dealing with a nerve signal transmission speed of about one tenth of a meter per second (about 4 inches per second), such a distance means a delay of two seconds or more. The problem is that human recall can very often occur instantly. You can ask someone his address or telephone number or the names of his family members, and he will be able to answer instantly, without this requiring a delay of two seconds. And if you ask me, "What's the New York baseball team?" I will answer instantly. There must be a thousand questions you could answer instantly, as soon as someone finished asking them. 
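The postscript's numbers can be combined in one place. All the figures below are the rough estimates quoted above (average straight-line cortical distance, the 1.6 tortuosity multiplier, and the dendritic speeds from the 2025 paper):

```python
# Combining the postscript's estimates: average straight-line cortical
# distance, times tortuosity, divided by the measured dendritic speeds.
# All numbers are the rough figures quoted in the text above.

INCH_TO_M = 0.0254

straight_line_in = 0.9 * 7.0  # ~6.3 inches between random cortex points
tortuosity = 1.6              # path-length multiplier from diffusion studies
path_m = straight_line_in * tortuosity * INCH_TO_M  # ~0.26 m, about 10 inches

# Travel time at the two dendritic speeds reported in the 2025 paper
for label, speed in (("sEPSP (~0.1 m/s)", 0.1),
                     ("back-propagating AP (~0.34 m/s)", 0.344)):
    print(f"{label}: {path_m / speed:.1f} s per average trip")
```

At the slower dendritic speed, one average trip across the cortex would take about two and a half seconds, longer than many acts of recall take in their entirety.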

Friday, February 20, 2026

Contrary to "Brains Make Minds" Claims, Brains Are Not Much More Electrically Active When You Are Awake

Those who claim that the brain makes the mind keep trying to push the silly idea that you are just a bunch of neural signals passing around inside your head. The scientists who make such claims typically are members of a belief community, a kind of sect of the ivory towers. When we hear such claims we are observing the speech customs of such a community. The members of belief communities often keep repeating the same old claims, which often are not justified by any robust evidence. 

dumb professor

It is interesting to consider this question: what would we expect if minds are produced by the mere firing of neurons? Three predictions would seem to follow from such an idea:

(1) If minds are produced by firing neurons, we would expect that neurons would fire much more frequently during conscious awareness than during unconscious sleep.
(2) If minds are produced by firing neurons, we would expect that neurons would fire much more frequently during heavy mental activity such as deep concentration, heavy calculation or rapid memory recall, than during a passive awake condition involving a mental resting state.
(3) If minds are produced by firing neurons, we would expect that when neurons fire most rapidly, that would produce the highest state of consciousness or mental activity. 

None of these predictions turns out to be true. To investigate this matter, you should ignore the type of visual shown below, a misleading visual that often appears in articles about the brain. The reality is that all of these types of brain waves occur during each of the listed states. 

misleading brain wave visuals
A misleading diagram recurring in neuroscience articles

Figure 1 of the paper here ("Firing rates of hippocampal neurons are preserved during subsequent sleep episodes and modified by novel awake experience") shows two scatter plots of neural firing rates in the hippocampus of rats. One scatter plot is marked "Awake" and the other is marked "Sleep." The two scatter plots look virtually identical. They both show rates of neuron firing varying from about once every ten seconds to ten times per second. 

neural firing rates awake and sleep

Judging from such graphs, neurons do not seem to fire more often in rats when they are awake. Some people claim that neurons fire less frequently during sleep, but they typically fail to give us specific figures as to how much less frequently they fire. 

Referring to readings from an EEG (a device that reads brain waves), we read this on an expert answers site:

"In REM sleep, the EEG is remarkably similar to that of the awake state (Purves et al., 2001). Although the EEG represents the synchronized activity of many neurons in the cortex, it does give us a clue whether they are firing faster or not. Wakefulness is mainly dominated by beta and gamma waves (source: Scholarpedia), i.e. 12 - 100 Hz. REM sleep is characterized by low-amplitude mixed-frequency brain waves, quite similar to those experienced during the waking state - theta waves, alpha waves and even the high frequency beta waves more typical of high-level active concentration and thinking, i.e. 4-30 Hz (table 1) (source: Sleep)."

When I ask Google "how much do average neuron firing rates vary between sleep and wakefulness," I get the AI overview answer below:

"Average neuron firing rates decrease by approximately 30-40% during non-REM (NREM) sleep compared to wakefulness, largely driven by the appearance of 'OFF' periods (silence)...Similar to active wakefulness, neural firing rates in REM are generally higher than in NREM and often match awake levels."

This is an indication of only a small difference in neuron firing rates between sleep and wakefulness. We are told that during one type of sleep (NREM sleep) firing rates decrease by 30–40%, but that is a gradual decrease occurring over an hour or two. So if firing rates gradually declined to a level 30% below wakefulness during NREM sleep, the average over the whole NREM period would be something like 15% less than during wakefulness. And during the other type of sleep (REM sleep), neurons seem to fire about as often as when you are awake. 

Diving into this AI overview by doing more scrolling or clicking, I find the story changes. Later in the same AI overview I am told that "average neuron firing rates vary significantly between sleep and wakefulness, typically characterized by a 10%–20% decrease during sleep, though these shifts depend heavily on the specific brain region and sleep stage." 

The type of graph that gives you the most information when analyzing brain waves is a type of graph called the EEG multitaper spectrogram. Someone unfamiliar with it may have to take a minute or two studying how the graph works before he can understand it. The graph can show up to 10 hours of brain activity. Each column of pixels shows the activity for a particular short time unit such as a minute or a few minutes. The higher rows on the graph represent the higher-frequency brain waves. A red color represents a high intensity; a yellow or green color represents a medium intensity; and a blue color represents a lower intensity. 

We are sometimes shown versions of this graph which will suggest that lower-frequency brain waves are much more common during sleep. However, in Figure 7 of the paper here, we are shown  multitaper EEG spectrograms that are called representative of sleep, and those diagrams seem to depict theta, alpha and beta waves occurring almost as frequently as delta waves. 

The paper "Sleep Neurophysiological Dynamics Through the Lens of Multitaper Spectral Analysis" by Prerau et al. seems like the best paper I can find giving data comparing brain waves during sleep and brain waves when awake. The paper has many examples of EEG multitaper spectrograms plotting the differences between brain waves during sleep and brain waves when awake. The graphs show no strong evidence of greater electrical activity in the brain while you are awake. 

Here is figure 1 of the paper, showing electrical activity in a brain from midnight to 10:00 AM. Roughly the first hour and the last hour are wakefulness. 

brain wave differences between sleep and awake

I can give some tips on interpreting this graph:

(1) The graph plots brain waves that occurred over about 10 hours that included 8 hours of sleep and 2 hours of being awake. 
(2) The left edge of the graph plots brain waves occurring during an hour of being awake, before the 8 hours of sleep occurred. 
(3) The right edge of the graph plots brain waves occurring during an hour of being awake, after the 8 hours of sleep occurred. 
(4) The middle 80% of the graph plots brain waves occurring during sleep. 
(5) With this type of graph, the redder the color and the higher up on the graph the color occurs, the greater the indication that more electrical activity was occurring. For any given frequency, blue represents the lowest power; green represents a higher power than blue; yellow represents a higher power than green; and red represents a higher power than yellow. 

So what do we see in the graph above? Overall, there is little difference between the amount of electrical activity occurring during sleep and while being awake. According to the graph, while awake there is a slightly greater power in the higher frequency band (about 15 Hz), because the top left corner and the top right corner are a little more green than blue.  But according to the same graph while the person is awake there is slightly less power in the lower frequency band (about 7 Hz), because in the bottom left and the bottom right of the graph we see more green and yellow than red. Overall, we seem to see no evidence of much greater electrical activity in the brain when a person is awake, compared to when he is asleep. 

What we see here is evidence suggesting that brains are not much more electrically active when you are awake as opposed to when you are asleep. This isn't what we would expect under the dogma that the brain is the source of the mind. 

Another interesting comparison to make is to compare brain waves during wakefulness and brain waves during anesthesia. The average person might think that the firing rate of neurons slows down greatly during the unconsciousness produced by anesthesia. That is not true, however. 

The diagram below is from the paper "Electroencephalographic dynamics of etomidate-induced loss of consciousness" that you can read here. At the bottom we see an EEG multitaper spectrogram showing brain waves during both a conscious awake state and unconsciousness produced by anesthesia. The first third of the bottom visual shows the awake and conscious state. According to the diagram, loss of consciousness (LOC) occurs at around the 300-second mark, about the middle of the colored visual. 

brain waves awake state versus anesthesia

There is no reduction in brain activity or brain firing rates documented by the diagram. In fact, the caption of the paper says, "Compared with those during the awake period, the powers of the slow wave (< 1.0 Hz), delta wave (1.0–4.0 Hz), theta wave (4.0–8.0 Hz), and alpha wave (8.0–13.0 Hz) during the etomidate-induced LOC [loss of consciousness] were significantly increased (C: 0–22.97 Hz, 27.28–40.00 Hz; p < 0.001, two-group test for spectra)." Notice that at the point marked LOC in the figure (which stands for Loss of Consciousness), we see no real change in the brain activity. Injecting the anesthetic produces a fairly small change, but at the time when the consciousness is lost, brain activity does not change. 

This is just as we would expect under the idea that your brain is not the source of your mind. 

Monday, February 16, 2026

When Neuroscientists Say "Encode," Suspect It's a Load (of BS)

Neuroscientists have no credible story to tell of how a brain could learn anything. Nothing in a brain has the slightest resemblance to a device for storing or retrieving memories or learned information. Humans create various types of objects that store information and allow the retrieval of such information, such as:

(1) a notepad and a pencil;

(2) an old-fashioned cassette tape recorder;

(3) a computer with a hard drive;

(4) printed books; 

(5) a smartphone or digital pad device capable of storing keystrokes. 

So we know the types of things that an object needs to have in order for it to be capable of physically storing learned information long-term and also rapidly retrieving that information. These include things like the following (a particular system does not necessarily need all of them):

(1) The use of some type of system of encoding whereby learned information can be translated into tokens that can be written on a surface. 

(2) Some type of component capable of writing such tokens to some kind of storage surface (for example, a pencil or the spray unit of an inkjet printer or the read-write head of a hard drive). 

(3) Some surface capable of permanently storing information written to it (for example, paper or the magnetic surface in a hard drive). 

(4) Some material arrangement allowing a sequential retrieval of learned information (for example, the binding of a notebook and the lines on its pages which facilitate sequential retrieval of the stored information, or the physical arrangement in a hard drive that allows data to be retrieved sequentially). 

(5) The use of addresses and indexes that allow an instantaneous retrieval of information. 

(6) Some type of device for retrieving information stored in an encoded format, and converting it to an intelligible form (for example, some computer technology capable of reading magnetic bits, and converting that to readable characters shown on a screen).  

(7) Conversion tables or conversion protocols such as the ASCII code, which constitute a standard method for converting letters into numbers.

(8) Computer software subroutines or functions capable of doing things such as converting text into ASCII decimal numbers, and then converting such decimal numbers into a sequence of binary digits. 
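As a point of contrast, item (8) is trivial to state explicitly for a man-made system, precisely because such systems use well-specified conversion protocols. A minimal Python sketch of a text-to-ASCII-to-binary conversion:

```python
# Minimal example of item (8): converting text into ASCII decimal numbers,
# and then converting those decimals into sequences of binary digits.
# This is the kind of explicit, fully specified encoding step that
# man-made storage systems rely on.

text = "cat"
decimals = [ord(ch) for ch in text]              # ASCII code points
binaries = [format(d, "08b") for d in decimals]  # 8-bit binary strings

print(decimals)   # [99, 97, 116]
print(binaries)   # ['01100011', '01100001', '01110100']
```

Every step of this conversion is documented and reversible; nothing remotely analogous has ever been specified for synapses or neurons.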

The brain has nothing like any of these things.  So neuroscientists have no credible story to tell of how a brain could learn anything or recall anything. But this does not stop neuroscientists from engaging in BS bluffing, and trying to make it look like they have a little bit of understanding of something they have no understanding of at all. 

The latest example of such BS bluffing is a news article on the often-erring MedicalXPress site, where we often find unfounded clickbait headlines claiming grand results not corresponding to anything actually done. This one has the very misleading headline of "How the brain learns and applies rules: Sequential neuronal dynamics in the prefrontal cortex." We get a quote from a scientist who makes a bunch of unfounded claims not matching anything established by his research paper.

Unfounded boast of scientist

The scientist's paper is a poorly-designed piece of low-quality research entitled "The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics." It's a study involving mice. The first question you should always ask when examining a study like this is: how large were the study group sizes (in other words, how many mice were used for each of the study groups)?  Normally it is easy to find that. You can search in the scientific paper for the phrases "n=" or "n =" which will usually tell you how many mice were used. Or, you can search for the word "mice," and you will typically find a nice clear phrase such as "10 mice," which will tell you how many mice were used for a particular part of an experiment. 

Violating the rules of good scientific procedure, the paper never tells us the number of mice that were used. The paper contains some 70+ uses of the word "mice," none of them accompanied by a number stating how many mice were used. Searching for the phrases "n=" or "n =" also does not reveal how many mice were used. 

We can rather safely assume that the number of mice used in each study group was some ridiculously small number such as only 6 mice per study group. When neuroscientists use halfway-decent study group sizes, they almost always mention the number of mice used. When neuroscientists instead use ridiculously inadequate study group sizes, they may be too ashamed to state how small the number of mice was. 

The researchers were probably too ashamed to plainly state this information, which any good scientific paper should simply state. But by going to great labors, wading through the senselessly convoluted mathematics that clutters up this paper, you can find a statement that allows you to deduce with high likelihood how many mice were used: "Decoding was conducted for each mouse (mouse IDs 1 to 6) and on specific days (days 1, 2, and 6)." So it seems only six mice were used. 

The study is therefore a bad example of very low-quality research. No study like this should be taken seriously unless it used at least 15 or 20 subjects per study group. The study also makes no mention of any control subjects, and no mention of any blinding protocol. 

The paper is guilty of ridiculous analytic techniques. We have a long gobbledygook discussion of arbitrary, convoluted "maze within a maze within a maze" mathematics that was probably invented after gathering the data, so that some claim could be made that evidence of an encoding of learning had been found. The screen shot below shows only a very small fraction of the murky "down the rabbit hole" labyrinthine rigmarole that was going on:

 An interesting fact is that if you are allowed to engage in unbridled speculation of very high complexity after gathering data, and if you have only a very small study group size, then almost any data can be claimed as evidence of secret encoding. For example:

  • Let us suppose you have data on the exact random locations of the facial pimples of six teenage girls with bad cases of acne.
  • Let us suppose you are trying to support some claim that these pimples are encodings of some data related to the girls (maybe encoding of their names or their brother's names or the names of their cats or any number of possible things). 
  • Let us suppose that you are allowed to speculate as much as you want about encoding methods, coming up with any cockamamie scheme of encoding you can imagine, using mathematics as complicated as you wish.

Then, given sufficient labors and sufficient iterations, you will be able to come up with some speculative scheme of encoding that seems to match up the random pimples on the girls' faces with whatever type of data item you have chosen. An important point is that, being pure nonsense, the superficial evidence you have provided for this "scheme of encoding" will break down when a much larger data set is used. You might have some weak, superficial evidence for some "system of encoding" using a data set of only six girls with pimples; but things will break down and your claimed evidence will dissolve when you use a larger data set such as 12 girls with pimples or 20 girls with pimples. 

This is why people producing these kinds of BS studies like to use very small study group sizes (such as only 6 mice). The smaller the study group size, the easier it is to produce false alarms, and the easier it is to create "see whatever you hope to see" pareidolia. 
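The point can be illustrated with a toy simulation (my own sketch, not taken from any paper): if you allow yourself to try many arbitrary "decoding schemes" against random data, then with a tiny data set you will almost always find a perfect-looking match by chance alone, while the same fishing expedition almost always fails against a larger data set.

```python
# Toy simulation: post-hoc "code fishing" against random binary data.
# With 6 observations, some scheme almost always matches by luck;
# with 20 observations, almost none ever does.
import random

def found_spurious_code(n_observations, n_schemes_tried, rng):
    """Return True if at least one random 'decoding scheme'
    happens to match n random binary observations exactly."""
    data = [rng.randint(0, 1) for _ in range(n_observations)]
    for _ in range(n_schemes_tried):
        guess = [rng.randint(0, 1) for _ in range(n_observations)]
        if guess == data:
            return True
    return False

rng = random.Random(0)
trials = 100
small = sum(found_spurious_code(6, 1000, rng) for _ in range(trials)) / trials
large = sum(found_spurious_code(20, 1000, rng) for _ in range(trials)) / trials
print(f"false-alarm rate with 6 data points:  {small:.2f}")   # close to 1.0
print(f"false-alarm rate with 20 data points: {large:.2f}")   # close to 0.0
```

The arithmetic behind the result: each arbitrary scheme matches 6 random bits with probability 1/64, so 1000 attempts virtually guarantee a "hit," while matching 20 bits has probability of about one in a million per attempt.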

neuroscientist on pedestal
The building blocks of his pedestal

An almost equally bad study claiming something about neural codes is the study "Specialized structure of neural population codes in parietal cortex outputs." It's a study groundlessly claiming evidence that "Cortical neurons comprising an output pathway form a population code with a unique correlation structure that enhances population-level information to guide accurate behavior." The claim is groundless because the study used only ten mice. And we mostly cannot tell how large the study group sizes were for each particular part of the study, because the paper often refers vaguely to "mice," without telling us how many mice were used for a particular part of the experiment. We are told that there were "data exclusions" under a rule of "we used mice with greater than 70% behavioral performance." So we cannot tell whether the study group sizes in some cases were much smaller than 10 mice. A "Reporting Summary" checklist at the end of the paper has a checkmark next to the box "The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement." But given the situation described above, the box should not have been checked. 

We have the same type of "maze within a maze within a maze" mathematics used to try to whip up some evidence of a secret code that can only be tortured out of the data. The screen shot below shows a bit of the math gobbledygook:


A look at the programming code suggests a witch's brew of poorly documented code (for example, the files here and here), running many a strange and arbitrary doubly-nested loop to produce some obscure manipulations of the original data.  It all smells very much like a "keep torturing the data until it confesses" affair. 

keep torturing the data until it confesses

The study confesses that "No statistical method was used to predetermine the sample size," a confession also made by the previously discussed study ("The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics"). 

None of the papers I have discussed in this post provide any robust evidence for a discovery of codes used in a brain to transmit or store information. 

Below is a depiction of the system of representations used in the genetic code, by which particular triple combinations of DNA base pairs represent particular amino acids used to make proteins. This is the only scheme of representation that has ever been discovered in the human brain, and it is merely a system for representing low-level chemical information. The evidence for the real existence of this code is rock-solid. Now and then there will appear boasts by neuroscientists of discovering some other system of representation in the human brain, but no such boasts have been well-founded or well-replicated, and none of them hold up well to critical scrutiny. 
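For contrast, the genetic code shows what a genuine, well-established scheme of representation looks like when written down. The sketch below (my own illustration; the codon assignments are a handful of entries from the standard genetic code) shows how each three-letter DNA codon stands for a specific amino acid:

```python
# A few entries from the standard genetic code: each three-letter
# DNA codon represents a particular amino acid (or a stop signal).
CODON_TABLE = {
    "ATG": "Methionine (start)",
    "TGG": "Tryptophan",
    "GCT": "Alanine",
    "AAA": "Lysine",
    "TAA": "STOP",
}

def translate(dna):
    """Read a DNA string three letters at a time, looking up each codon."""
    return [CODON_TABLE.get(dna[i:i+3], "?") for i in range(0, len(dna), 3)]

print(translate("ATGGCTAAA"))  # ['Methionine (start)', 'Alanine', 'Lysine']
```

Nothing remotely like such a lookup table, with fixed symbol-to-meaning assignments, has ever been credibly documented for learned information in the brain.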


Thursday, February 12, 2026

Crude "Finger in the Sand" Diagrams of "Engrams" Suggest Vacuous Theorizing

There's a year 2025 paper on the Cornell physics paper server entitled "Engram Memory Encoding and Retrieval: A Neurocomputational Perspective." The author attempts to persuade us that he understands something about engrams (memories allegedly stored in brains), something for which there is no real evidence. What we have in the paper are misstatements, hand-waving, bluffing and boasting, adorned by about the most primitive diagrams anyone could give. The "finger in the sand" crudity of the diagrams suggests that there is no real underlying understanding of how a brain could store or retrieve memories. 

Before discussing how crude the diagrams are, let me list some of the bad misstatements and half-truths in the paper:

  • The author states, "Despite substantial research into the biological basis of memory, the precise mechanisms by which experiences are encoded, stored, and retrieved in the brain remain incompletely understood." The truth is that scientists have no understanding at all of such a thing, and no robust evidence that any such mechanisms even exist. 
  • The author states, " A growing body of evidence supports the engram theory, which posits that sparse populations of neurons undergo lasting physical and biochemical changes to support long-term memory."  This is false. The claimed evidence for engrams is all junk-science research guilty of sins such as way-too-small study group sizes and unreliable measurement techniques such as judgments of claimed "freezing behavior." 
  • The author states, "These findings suggest that memory efficiency, capacity, and stability emerge from the interaction of plasticity and sparsity constraints." This is an example of vacuous hand-waving.
  • The author states, "Modern discoveries of 'silent engrams' — which exist as physical traces but cannot be retrieved by natural cues, yet can be artificially reactivated — directly align with Semon’s concept of 'primarily latent modifications.' " There has been no actual discovery of 'silent engrams' or any other type of engram. All claims to have made such a discovery are unfounded, and not supported by any well-designed studies with high statistical power. 
  • The author states, "Modern technological advancements have revolutionized the study of engrams, enabling researchers to investigate how specific memories translate into neuronal changes with unprecedented resolution (Luis & Ryan, 2022). These technologies include transgenic manipulation, optogenetics, chemogenetics, electrophysiology, and sophisticated behavioral techniques." The statement is untrue. Fancy technologies are used in studies looking for engrams, often as a kind of window-dressing to impress the easily impressed. But such studies have produced no robust evidence for any such thing as an engram. No one has ever found the slightest trace of any learned information in brain tissue by studying human brain tissue. Studies looking for evidence of engrams in animals have been a cesspool of junk science, and have been almost invariably guilty of very bad research practices such as way-too-small study group sizes, a lack of a blinding protocol, a lack of pre-registration, and the use of unreliable measurement techniques such as "freezing behavior" judgments. 
  • The author states, "Modern neuroscience, armed with advanced technologies like optogenetics and immediate early gene labeling, has provided compelling evidence for the existence and dynamic nature of engram neurons and their ensembles." To the contrary, no such evidence has ever been produced. Any papers claiming to have produced such evidence will not hold up to critical scrutiny. 
  • The author states, "Furthermore, the activity of engram neurons can be tracked in vivo during their maturation from encoding through consolidation using functional indicators like GCaMP (calcium indicators; Cupollilo et al., 2025). These experimental manipulations, particularly in the hippocampus, have demonstrated the necessity and sufficiency of engram cells for memory functions, enabling selective memory erasure, artificial recall, and even the creation of synthetic memories."  The first reference is one of many references the author makes to the paper "Early changes in the properties of CA3 engram cells explored with a novel viral tool" authored by Cupollilo and others, which is a very low-quality junk science paper using way-too-small study group sizes such as about 5 mice per study group, a paper guilty of defects such as failing to do any sample size calculation, and relying on unreliable "freezing behavior" judgments. The second sentence (beginning with "these experimental manipulations") is simply untrue, and none of the things claimed as "demonstrated" has actually been demonstrated. 
When the paper's author (Daniel Szelogowski) gives us a diagram regarding these claimed "engrams," we get a visual sign of the lack of any substantive theory underlying his claims. Below is a screen shot from the paper showing its Figure 1:

engram diagram

Notice the "finger in the sand" nature of the diagrams. The diagrams are like those a five-year-old child might draw, using crayons. When people understand things, they may produce very detailed diagrams showing the depth of their understanding. For example, do a Google image search for "genetic code" and you will get a very detailed diagram showing the exact scheme of representations used by DNA. But when people do not understand things, and they are merely feigning understanding, they may tend to produce very crude "finger in the sand" diagrams like those in the visual above. 

For example, imagine you had no understanding of how the Apollo 11 mission was able to leave our planet, land on the moon, and return to our planet. Rather than producing detailed diagrams showing things like the Saturn V rocket and the Lunar Excursion Module (LEM), you might produce "finger in the sand" diagrams like the ones below:


The Apollo 11 diagram above is as laughable a "finger in the sand" diagram as the diagrams in 
Szelogowski's paper. Neither Szelogowski nor any scientist has any real understanding of how a brain could ever encode and store any of the types of learned information that humans can remember. Neither Szelogowski nor any scientist has any real understanding of how a brain could ever instantly retrieve the correct information when a person hears a name or sees a face. 

When attempting to persuade us that they have some understanding of how memory could work in a brain, those such as Szelogowski mainly engage in vacuous, jargon-adorned hand-waving. Some vague woolly phrase such as "synapse strengthening" is used. Then mentions are made of some type of actual chemistry observed in the brain, to make such vacuous hand-waving sound more substantive. There is no substance involved and no detail involved when Szelogowski states this:

"Synaptic changes primarily encode the specific content of a memory by modifying the strength of connections between neurons, while intrinsic and non-synaptic changes modulate the overall responsiveness and participation of individual neurons within the engram. This coordinated interplay ensures both the precise encoding of information and the dynamic integration of neurons into stable memory circuits."  

The claim above makes no sense. Information is not encoded by some "strengthening" action. Humans are familiar with various ways in which information is encoded and stored, and none of them occur by strengthening. When information is encoded and stored, what occurs is writing, according to some scheme of representation such as the English alphabet, the ASCII code, and so forth. 

For the neuroscientist, the problem is that nothing in the brain bears any resemblance to a unit for writing information. So what do you do if you are a neuroscientist trying to suggest that brains write memories? You appeal to "strengthening," and hope that people don't recognize how silly your language is. It's rather like a suitor who has no evidence that he is earning money, and who tries to impress a woman by bragging about how he is improving his muscles by weight training, hoping that she somehow thinks this is something like earning money. 

When people understand something and are asked to explain it, they tend to speak exactly in ways that show their understanding. Imagine you interview someone for a job as a computer programmer, and you ask the person, "How can I modify my web site so that it can store and remember data the users type in on a registration page?" If the job candidate is knowledgeable about this topic, he would tend to give a very exact and very detailed answer rather like this:

"Well, it depends on how much you want to spend, and how many people use your site. If you don't have many users or don't want to spend much, you could use a simple pipe-delimited text file to store your data. Each row in such a file would give data on one user, and the pipe character would separate the data fields such as name and email address. But finding a user in such a file requires a scan of the whole file, which isn't efficient if you have many thousands of users. In that case, it might be better to use a relational database product such as MySQL. You could create a database, and then use the 'CREATE TABLE' command to create a new table with text storage fields such as UserName and Email. Once you had that table, your web site could add a new record for each new user, using the handy INSERT command available in SQL products such as MySQL. For updating an existing user's data, you could use the UPDATE command. Products such as MySQL take care of details such as converting from text strings to ASCII, and converting from ASCII to binary -- that's all encapsulated under the engine of the relational database product. Your invocation of MySQL commands would take place in a handler function you would write to respond to the press of a Submit button on your web site's form. Of course, some prefer never to get involved with SQL commands. If you're that type, there are various class libraries that encapsulate all the SQL commands, so you don't have to remember any. Then you can just instantiate an object and supply the data as arguments to one of its methods." 
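The approach the imagined candidate describes can be sketched in a few lines. The sketch below (mine, not the candidate's) uses Python's built-in SQLite rather than MySQL so it is self-contained; the table and column names are merely illustrative:

```python
# Minimal sketch of storing and updating user registration data
# in a relational database, using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT PRIMARY KEY, Email TEXT)")

# INSERT a new record when a user first registers.
conn.execute("INSERT INTO Users (UserName, Email) VALUES (?, ?)",
             ("jsmith", "jsmith@example.com"))

# UPDATE the record when the user changes their data.
conn.execute("UPDATE Users SET Email = ? WHERE UserName = ?",
             ("john.smith@example.com", "jsmith"))

row = conn.execute("SELECT Email FROM Users WHERE UserName = ?",
                   ("jsmith",)).fetchone()
print(row[0])  # john.smith@example.com
```

Note how every step is exact and specific: a named table, named fields, and explicit write and read commands. That exactness is what genuine understanding of information storage looks like.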

But imagine you are interviewing a job candidate who does not know how a web page stores data. If you ask him how you can modify your web site to store the data users submit on a form, you might get some answer like this:

"Data processing is a very important function of a web site. Of course, when users submit data, they want it to be saved, not forgotten. Various components can be crafted that enable this functionality. It would require strengthening of the code that underlies your web site. It would require an encoding of information and a coordinated interplay between complex electronic components, as well as the participation of diverse units of functionality."

This job candidate apparently knows nothing about how a web site can store data a user types into a form. All he has given is some vacuous wooly phrases lacking in specifics. His answer sounds like the equally empty and vacuous lines I quote above from Szelogowski, who uses empty verbiage such as "coordinated interplay" and "modulate the overall responsiveness." Szelogowski sounds just as if he has no actual understanding of how a brain could store or retrieve a memory. And he's in the same boat as every neuroscientist, none of whom understand any such thing. 

One huge problem is that what Szelogowski is appealing to (synapse strengthening) is a slow process requiring hours or days. But that cannot explain human learning, which can occur instantly. If someone tells you that your mother or child has just died, you do not require hours or days to learn such a fact. You learn such a fact instantly. 

Szelogowski's only mention of this issue is a feeble one. Appealing to some wild speculation, he says, "Furthermore, non-synaptic plasticity, such as the regulation of neural membrane properties, can operate on faster timescales, potentially enabling rapid initial information storage, complementing the slower, more enduring synaptic plasticity processes (Ferrand et al., 2025)." This is basically equivalent to a goofy statement such as, "I say memory storage occurs through synapse strengthening, but that isn't fast enough, so maybe there might be something else that is fast enough." 

Szelogowski's Figure 2 in the paper is just as vacuous a "finger in the sand" affair as his Figure 1. Below is his Figure 2:

silly engram diagram

These are not the kind of diagrams that people produce when they understand something. A group of connected nodes like the one we see above is not even a sensible depiction of any such thing as an encoding of learned information. 

Just as unimpressive is Szelogowski's Figure 6. He takes a pentagram of circles, and repeats that pentagram about 13 times, with variations of how the circles are colored. It's another very crude "finger in the sand" kind of diagram suggesting that Szelogowski has no substantive understanding of how a brain could store a memory or preserve a memory for a lifetime or instantly retrieve a memory. 

Were anyone to ever explain how a brain could store memories and allow for instant memory retrieval, they would have to pay very much attention to speed. One of the biggest reasons why a brain cannot be the storage place of human memories is that humans can remember just the right information instantly, upon seeing a face or hearing a name. But there is nothing in a brain that can account for such blazing speed. We know the type of things that make possible fast retrieval in products humans make: things such as addresses, sorting and indexes. The brain has no addresses, no sorting and no indexes. So when a human instantly recalls many relevant facts after hearing a single name such as "Obama" or "Napoleon,"  that cannot be the result of brain activity. 
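The difference such things make can be shown with a simple sketch (my own example, using made-up record names): retrieving one record by scanning everything, versus jumping straight to it through a hash index.

```python
# Comparing retrieval without and with an index: a linear scan must
# examine records one by one, while a hash-indexed lookup (a Python
# dict) goes straight to the stored entry.
import time

records = {f"person_{i}": f"facts about person_{i}" for i in range(200_000)}
as_list = list(records.items())          # unindexed copy of the same data
target = "person_199999"

start = time.perf_counter()
linear = next(facts for name, facts in as_list if name == target)
scan_time = time.perf_counter() - start  # examines up to 200,000 entries

start = time.perf_counter()
indexed = records[target]                # hash index: no scan at all
index_time = time.perf_counter() - start

print(f"linear scan: {scan_time:.5f}s, indexed lookup: {index_time:.6f}s")
```

The indexed lookup is faster by several orders of magnitude, and the gap grows with the size of the data set. Fast retrieval in human-made systems always depends on organizational machinery of this kind.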

In this regard Szelogowski fails entirely. His paper makes zero uses of the word "speed," and has zero substantive references to the topic of speed. The paper fails even to explain how so small an item as the word "cat" could be converted to some neuron state or synapse state. 

But think for a moment of how utterly impossible it would be to explain how a brain could encode (translate into neural states and synapse states) all the types of things humans can learn and remember, which include all of these types of things:
  • Memories of daily experiences, such as what you were doing on some day
  • Facts you learned in school, such as the fact that Lincoln was shot at Ford's Theater
  • Sequences of numbers such as your social security number
  • Sequences of words, such as the dialog an actor has to recite in a play
  • Sequences of musical notes, such as the notes an opera singer has to sing
  • Abstract concepts that you have learned
  • Memories of particular non-visual sensations such as sounds, food tastes, smells, pain, and physical pleasure
  • Memories of how to do physical things, such as how to ride a bicycle
  • Memories of how you felt at emotional moments of your life
  • Rules and principles, such as “look both ways before crossing the street”
  • Memories of visual information, such as what a particular person's face looks like

Below are some quotes:
  • "There is no such thing as encoding a perception...There is no such thing as a neural code...Nothing that one might find in the brain could possibly be a representation of the fact that one was told that Hastings was fought in 1066." -- M. R.  Bennett, Professor of Physiology at the University of Sydney (link).
  • "No sense has been given to the idea of encoding or representing factual information in the neurons and synapses of the brain." -- M. R. Bennett, Professor of Physiology at the University of Sydney (link).
  • "How the brain stores and retrieves memories is an important unsolved problem in neuroscience." --Achint Kumar, "A Model For Hierarchical Memory Storage in Piriform Cortex." 
  • "We are still far from identifying the 'double helix' of memory—if one even exists. We do not have a clear idea of how long-term, specific information may be stored in the brain, into separate engrams that can be reactivated when relevant."  -- Two scientists, "Understanding the physical basis of memory: Molecular mechanisms of the engram."
  • "There is no chain of reasonable inferences by means of which our present, albeit highly imperfect, view of the functional organization of the brain can be reconciled with the possibility of its acquiring, storing and retrieving nervous information by encoding such information in molecules of nucleic acid or protein." -- Molecular geneticist G. S. Stent, quoted in the paper here
  • "Up to this point, we still don't understand how we maintain memories in our brains for up to our entire lifetimes." --neuroscientist Sakina Palida.
  • "The available evidence makes it extremely unlikely that synapses are the site of long-term memory storage for representational content (i.e., memory for 'facts' about quantities like space, time, and number)." --Samuel J. Gershman, "The molecular memory code and synaptic plasticity: A synthesis."
  • "Synapses are signal conductors, not symbols. They do not stand for anything. They convey information bearing signals between neurons, but they do not themselves convey information forward in time, as does, for example, a gene or a register in computer memory. No specifiable fact about the animal's experience can be read off from the synapses that have been altered by that experience." -- Two scientists, "Locating the engram: Should we look for plastic synapses or information-storing molecules?"
  • "If I wanted to transfer my memories into a machine, I would need to know what my memories are made of. But nobody knows." -- neuroscientist Guillaume Thierry (link). 
  • "While a lot of studies have focused on memory processes such as memory consolidation and retrieval, very little is known about memory storage" -- scientific paper (link).