Monday, February 16, 2026

When Neuroscientists Say "Encode," Suspect It's a Load (of BS)

Neuroscientists have no credible story to tell of how a brain could learn anything. Nothing in a brain has the slightest resemblance to a device for storing or retrieving memories or learned information. Humans create various types of objects that store information and allow the retrieval of such information, such as:

(1) a notepad and a pencil;

(2) an old-fashioned cassette tape recorder;

(3) a computer with a hard drive;

(4) printed books; 

(5) a smartphone or digital pad device capable of storing keystrokes. 

So we know the types of things an object needs in order to be capable of physically storing learned information for the long term and also rapidly retrieving that information. These include things like the following (a particular system does not necessarily need all of them).

(1) The use of some type of system of encoding whereby learned information can be translated into tokens that can be written on a surface. 

(2) Some type of component capable of writing such tokens to some kind of storage surface (for example, a pencil or the spray unit of an inkjet printer or the read-write head of a hard drive). 

(3) Some surface capable of permanently storing information written to it (for example, paper or the magnetic surface in a hard drive). 

(4) Some material arrangement allowing a sequential retrieval of learned information (for example, the binding of a notebook and the lines on its pages which facilitate sequential retrieval of the stored information, or the physical arrangement in a hard drive that allows data to be retrieved sequentially). 

(5) The use of addresses and indexes that allow an instantaneous retrieval of information. 

(6) Some type of device for retrieving information stored in an encoded format, and converting it to an intelligible form (for example, some computer technology capable of reading magnetic bits, and converting that to readable characters shown on a screen).  

(7) Conversion tables or conversion protocols such as the ASCII code, which constitute a standard method for converting letters into numbers.

(8) Computer software subroutines or functions capable of doing things such as converting text into ASCII decimal numbers, and then converting such decimal numbers into a sequence of binary digits (a short sketch of this kind of conversion follows this list). 
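To make concrete the kind of machinery described in items (7) and (8), here is a minimal Python sketch (purely illustrative, not anything claimed to exist in a brain) of converting text into ASCII decimal numbers and then into binary digits:

# Convert a short piece of text into ASCII decimal codes, then into 8-bit binary.
text = "CAT"
decimal_codes = [ord(ch) for ch in text]                  # [67, 65, 84]
binary_codes = [format(n, "08b") for n in decimal_codes]  # 8-bit binary strings
print(decimal_codes)   # [67, 65, 84]
print(binary_codes)    # ['01000011', '01000001', '01010100']

Every step here depends on a conventional lookup standard (ASCII) and on hardware built to write and read such tokens; nothing resembling either has been identified in neural tissue.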

The brain has nothing like any of these things.  So neuroscientists have no credible story to tell of how a brain could learn anything or recall anything. But this does not stop neuroscientists from engaging in BS bluffing, and trying to make it look like they have a little bit of understanding of something they have no understanding of at all. 

The latest example of such BS bluffing is a news article on the often-erring MedicalXPress site, where we often find unfounded clickbait headlines claiming grand results that do not correspond to anything actually done. The article bears the very misleading headline "How the brain learns and applies rules: Sequential neuronal dynamics in the prefrontal cortex." In it we hear a quote from a scientist who makes a bunch of unfounded claims not matching anything established by his research paper. 

[Image: the scientist's unfounded boast]

The scientist's paper is a poorly-designed piece of low-quality research entitled "The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics." It's a study involving mice. The first question you should always ask when examining a study like this is: how large were the study group sizes (in other words, how many mice were used for each of the study groups)?  Normally it is easy to find that. You can search in the scientific paper for the phrases "n=" or "n =" which will usually tell you how many mice were used. Or, you can search for the word "mice," and you will typically find a nice clear phrase such as "10 mice," which will tell you how many mice were used for a particular part of an experiment. 
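As a purely illustrative sketch (the file name "paper.txt" is hypothetical, standing in for a plain-text dump of whatever paper you are checking), this is the kind of simple search being described:

# Pull out every "n = <number>" and "<number> mice" phrase from a paper's text.
import re

with open("paper.txt", encoding="utf-8") as f:
    text = f.read()

sample_sizes = re.findall(r"\bn\s*=\s*\d+", text)      # e.g. "n=10", "n = 6"
mouse_counts = re.findall(r"\b\d+\s+mice\b", text)     # e.g. "10 mice"

print("n= mentions: ", sample_sizes or "none found")
print("mice counts: ", mouse_counts or "none found")

A well-written paper makes such a search unnecessary by stating its group sizes plainly.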

Violating the rules of good scientific procedure, the paper never tells us the number of mice that were used. The paper contains some 70+ uses of the word "mice," none of which is accompanied by a number telling us how many mice were used. Searching for the phrases "n=" or "n =" does not reveal how many mice were used. 

We can rather safely assume that the number of mice used in each study group was some ridiculously small number such as only 6 mice per study group. When neuroscientists use halfway-decent study group sizes, they almost always mention the number of mice used. When neuroscientists fail to use halfway-decent study group sizes, and instead use ridiculously inadequate ones, they may be too ashamed to state how few mice they used. 

Going to efforts no one should have to go to in order to get information that the researchers were probably too ashamed to plainly state (information that any good scientific paper should simply state), you can find a statement that allows you to deduce with high likelihood how many mice were used. After wading through the senselessly convoluted mathematics that clutters up the paper, you can find this statement: "Decoding was conducted for each mouse (mouse IDs 1 to 6) and on specific days (days 1, 2, and 6)." So it seems only six mice were used. 

The study is therefore an example of very low-quality research. No study like this should be taken seriously unless it used at least 15 or 20 subjects per study group. The study also makes no mention of any control subjects, and no mention of any blinding protocol. 
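A standard power calculation (my own back-of-the-envelope figuring, assuming a simple two-group comparison with a two-sample t-test, not anything stated in the paper) shows why groups of about six animals are far too small:

# Required subjects per group for 80% power at alpha = 0.05, two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.5, 0.8, 1.0):   # Cohen's d: medium, large, very large effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d = {d}: about {n:.0f} subjects per group")
# effect size d = 0.5: about 64 subjects per group
# effect size d = 0.8: about 26 subjects per group
# effect size d = 1.0: about 17 subjects per group

Even for an implausibly large effect, the conventional 80%-power threshold calls for roughly 17 subjects per group, in line with the 15-to-20 figure mentioned above.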

The paper is guilty of ridiculous analytic techniques. We have a long gobbledygook discussion of arbitrary, convoluted "maze within a maze within a maze" mathematics that was probably invented after gathering the data, so that some claim could be made that some evidence of encoding of learning had been found. The screen shot below shows only a very small fraction of the murky "down the rabbit hole" labyrinthine rigmarole that was going on:

 An interesting fact is that if you are allowed to engage in unbridled speculation of very high complexity after gathering data, and if you have only a very small study group size, then almost any data can be claimed as evidence of secret encoding. For example:

  • Let us suppose you have data on the exact random locations of the facial pimples of six teenage girls with a bad case of acne.
  • Let us suppose you are trying to support some claim that these pimples are encodings of some data related to the girls (maybe an encoding of their names or their brothers' names or the names of their cats or any number of possible things). 
  • Let us suppose that you are allowed to speculate as much as you want about encoding methods, coming up with any cockamamie scheme of encoding you can imagine, using mathematics as complicated as you wish.

Then, given sufficient labor and sufficient iterations, you will be able to come up with some speculative scheme of encoding that seems to allow you to match up the random pimples on the girls' faces with some type of data item you have chosen. An important point is that because it is pure nonsense, the superficial evidence you have provided for this "scheme of encoding" will break down when a much larger data set is used. So you might have some weak, superficial evidence for some "system of encoding" using a data set of only six girls with pimples; but things will break down and your claimed evidence will dissolve when you use a larger data set such as 12 girls with pimples or 20 girls with pimples. 

This is why people producing these kinds of BS studies like to use very small study group sizes (such as only 6 mice). The smaller the study group size, the easier it is to produce false alarms, and the easier it is to create "see whatever you hope to see" pareidolia. 
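The toy simulation below (my own illustration, not anything from the papers) shows the effect: with pure random data and arbitrary labels, searching through many candidate "encoding schemes" after the fact will typically turn up a scheme that appears to decode the labels when there are only 6 subjects, while the same search finds little once the group is larger.

import numpy as np

rng = np.random.default_rng(1)

def loo_accuracy(x, y):
    # Leave-one-out nearest-centroid accuracy for one projected feature x.
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        xs, ys = x[mask], y[mask]
        m0, m1 = xs[ys == 0].mean(), xs[ys == 1].mean()
        pred = int(abs(x[i] - m1) < abs(x[i] - m0))
        correct += int(pred == y[i])
    return correct / n

def best_post_hoc_accuracy(n_subjects, n_schemes=200, n_features=50):
    # Pure noise "measurements" and arbitrary balanced labels: no real code exists here.
    X = rng.normal(size=(n_subjects, n_features))
    y = rng.permutation(np.repeat([0, 1], n_subjects // 2))
    best = 0.0
    for _ in range(n_schemes):
        w = rng.normal(size=n_features)      # one candidate post-hoc "encoding scheme"
        best = max(best, loo_accuracy(X @ w, y))
    return best

for n in (6, 24, 96):
    print(f"n = {n:3d}: best apparent 'decoding' accuracy over 200 schemes = "
          f"{best_post_hoc_accuracy(n):.2f}")

With n = 6, the best of the 200 nonsense schemes typically scores 80% to 100%; with n = 96, even the best of the same search only creeps modestly above the 50% chance level. The "evidence" is an artifact of a small sample plus unlimited analytic freedom.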

[Image: a neuroscientist on a pedestal. Caption: "The building blocks of his pedestal"]

An almost equally bad study claiming something about neural codes is the study "Specialized structure of neural population codes in parietal cortex outputs." It's a study groundlessly claiming evidence that "Cortical neurons comprising an output pathway form a population code with a unique correlation structure that enhances population-level information to guide accurate behavior." The claim is groundless because the study used only ten mice. And we mostly cannot tell how large the study group sizes were for each particular part of the study, because the paper often refers vaguely to "mice," without telling us how many mice were used for a particular part of the experiment. We are told that there were "data exclusions" under a rule of "we used mice with greater than 70% behavioral performance." So we cannot tell whether the study group sizes in some cases were much smaller than 10 mice. A "Reporting Summary" checklist at the end of the paper has a checkmark next to the box "The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement." But given the situation described above, the box should not have been checked. 

We have the same type of "maze within a maze within a maze" mathematics used to try to whip up some evidence of a secret code that can only be tortured out of the data. The screen shot below shows a bit of the math gobbledygook:


A look at the programming code suggests a witch's brew of poorly documented code (for example, the files here and here), running many a strange and arbitrary doubly-nested loop to produce some obscure manipulations of the original data.  It all smells very much like a "keep torturing the data until it confesses" affair. 

[Image: "keep torturing the data until it confesses"]

The study confesses that "No statistical method was used to predetermine the sample size," a confession also made by the previously discussed study ("The medial prefrontal cortex encodes procedural rules as sequential neuronal activity dynamics"). 

None of the papers I have discussed in this post provide any robust evidence for a discovery of codes used in a brain to transmit or store information. 

Below is a depiction of the system of representations used in the genetic code, by which particular triplets of DNA bases (codons) represent particular amino acids used to make proteins. This is the only scheme of representation that has ever been discovered in the cells of the human brain, and it is merely a system for representing low-level chemical information. The evidence for the real existence of this code is rock-solid. Now and then there will appear boasts by neuroscientists of discovering some other system of representation in the human brain, but no such boasts have been well-founded or well-replicated, and none of them hold up well to critical scrutiny. 
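As a small supplementary sketch (standard textbook biology, showing only a handful of the 64 codons, and not anything drawn from the papers discussed above), here is what a genuine, physically verified code looks like when written out:

# A few entries of the genetic code: DNA triplets (codons, coding strand) to amino acids.
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the "start" signal
    "TGG": "Trp",
    "TTT": "Phe", "TTC": "Phe",
    "AAA": "Lys", "AAG": "Lys",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    # Read a coding-strand DNA sequence three bases at a time.
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("ATGTTTGGGAAATAA"))   # ['Met', 'Phe', 'Gly', 'Lys', 'STOP']

Note how specific and repeatable the mapping is; nothing remotely like such a verified symbol-to-meaning table has ever been produced for memories in a brain.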

