Sunday, December 14, 2025

Pareidolia Progress: Neuroscientists Get Better and Better at Seeing Things in the Brain That Are Not There

Our neuroscientists are getting nowhere in trying to show that there is a neural basis for human memory or human thinking. But they have something they can rely on to help hide their lack of progress: pareidolia. Pareidolia is seeing patterns that aren't really there, like some guy who examines his toast every day for years and then one day says, "I finally see the face of Jesus in my toast." A scientist conjuring up some pareidolia can make a nice-sounding progress report when no real progress has been made. I describe some examples of this in my post "Scientists Have a Hundred Ways to Conjure Up Phantasms That Don't Exist."

pareidolia

Several factors are driving the growing ability of neuroscientists to produce these pareidolia reports of seeing things in the brain that are not there. They include the following:

(1) Technological improvements have made it easier and easier for neuroscientists to use microelectrodes to record activity from individual neurons. This type of invasive brain intervention yields much more data than you get from the non-invasive technique of reading brain waves by having someone wear an EEG cap with electrodes. Neurons fire randomly at a rate between about 1 time per second and about 300 times per second. Being able to get data from all the individual firings of a large set of randomly firing neurons is almost the perfect seedbed for pareidolia. The more random, rapidly changing data you get, the easier it is to do noise-mining pareidolia in which you search for patterns you are eagerly hoping to find (see the first sketch after this list).

(2) Advances in computer programming and AI make it easier than ever to create computer programs that manipulate gathered brain data in arbitrary ways. Easy-to-use languages such as Python make it easier for neuroscientists who are not professional programmers to write such programs. AI tools such as ChatGPT allow the creation of artificially generated code that can be used as part of such programs. The easier it is to produce such computer programs, the easier it is to do the kind of "keep torturing the data until it confesses" work that is often the basis of pareidolia, or a pillar of pareidolia (the second sketch after this list gives the flavor).

(3) Advances in so-called artificial intelligence (AI), and the public accessibility of such techniques, make it easier than ever to do noise mining or data mining that extracts or constructs claimed patterns that a human might never be able to find, and that humans would never say existed if they made a simple, straightforward examination of the data (see the third sketch after this list).
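To make the first point concrete, here is a minimal Python sketch, using entirely invented data and assuming nothing about any real recording: simulate spike counts from neurons firing at random, then scan for "cue-responsive" neurons. With hundreds of neurons, pure noise reliably passes the test.

```python
# Simulate hypothetical spike counts from randomly firing neurons, then scan
# for neurons whose firing "differs" between cue and rest. All data is noise.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 300, 40

# Poisson spike counts in a one-second window, with firing rates drawn from
# the roughly 1-to-300-per-second range mentioned above. No neuron here
# responds to anything; every count is random.
rates = rng.uniform(1, 300, size=n_neurons)
counts = rng.poisson(rates[:, None], size=(n_neurons, n_trials))

# Arbitrarily label half the trials "cue" and half "rest".
cue = np.zeros(n_trials, dtype=bool)
cue[: n_trials // 2] = True

def perm_p(x, labels, n_perm=2000):
    """Permutation p-value for a difference in mean firing between conditions."""
    observed = abs(x[labels].mean() - x[~labels].mean())
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(labels)
        if abs(x[shuffled].mean() - x[~shuffled].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

p_values = np.array([perm_p(counts[i], cue) for i in range(n_neurons)])
print((p_values < 0.05).sum(), "of", n_neurons,
      "random neurons 'signal' the cue (about 15 expected by chance)")
```

The point is not that any particular paper did exactly this; it is that with this much noisy data, some "responsive neurons" are guaranteed to turn up before the analysis even starts, unless multiple comparisons are handled with great care.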
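And here is a hypothetical sketch of the "keep torturing the data until it confesses" loop that such easy scripting invites. Every analysis choice below (the time windows, the transforms, the threshold) is invented for illustration; none of it comes from any published code.

```python
# Try many arbitrary analysis choices on pure-noise "recordings" until
# something looks statistically significant.
import numpy as np

rng = np.random.default_rng(1)
spikes = rng.poisson(50, size=(300, 40)).astype(float)  # pure-noise spike counts
reward_prob = rng.uniform(0, 1, size=40)  # the variable we hope neurons "encode"

confessions = []
for window in [slice(0, 20), slice(10, 30), slice(20, 40)]:   # pick a window...
    for transform in [lambda x: x, np.sqrt, np.log1p]:        # ...and a transform...
        for neuron in range(300):                             # ...for every neuron
            x = transform(spikes[neuron])
            r = np.corrcoef(x[window], reward_prob[window])[0, 1]
            if abs(r) > 0.44:  # roughly p < 0.05 for the 20 trials in a window
                confessions.append((neuron, window, r))

print(len(confessions), "publishable 'findings' extracted from pure noise")
```

Run it and you get over a hundred "confessions" from data that contains nothing at all.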
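Finally, a sketch of the machine-learning version of the problem, again using invented data: with more recorded features (neurons) than trials, a simple linear decoder fits random labels perfectly in-sample while decoding nothing.

```python
# A least-squares linear "decoder" trained on random labels. Because there are
# more features than trials, it fits the training data exactly, yet it has
# learned nothing, as the held-out accuracy shows.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_features = 40, 300  # 40 trials, 300 "neurons"

X_train = rng.normal(size=(n_trials, n_features))
X_test = rng.normal(size=(n_trials, n_features))
y_train = rng.choice([-1.0, 1.0], size=n_trials)  # random labels: nothing to decode
y_test = rng.choice([-1.0, 1.0], size=n_trials)

# Minimum-norm least-squares solution (underdetermined, so it fits exactly).
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_acc = (np.sign(X_train @ w) == y_train).mean()
test_acc = (np.sign(X_test @ w) == y_test).mean()
print(f"training accuracy: {train_acc:.0%}, held-out accuracy: {test_acc:.0%}")
# Typically prints 100% on training trials and roughly 50% (chance) on new ones.
```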

We have a recent example of neuroscientists seeing things that are not really there in the paper "Dynamic coding and sequential integration of multiple reward attributes by primate amygdala neurons," which you can read here. We have two neuroscientists claiming to have found that "neurons frequently signalled reward probability in an abstract, stimulus-independent code." They provide no robust evidence for this claim. To qualify as decent evidence, a study like theirs would have required a study group size of at least 15 subjects. The study group they used was a way-too-small two subjects, both monkeys.

Below, from Figure 4 of the paper, is some of the data gathered from monitoring several hundred neurons of the two monkeys as they were either resting or presented with some kind of cue.

The little blips are neuron firing spikes that occurred while these cues were presented. A human looking at this data will see no pattern. But the authors played around with what we might charitably call "analysis techniques," trying to find some way to squeeze some evidence of a pattern or a code out of this neural noise. In the paper these are called "general linear models" and are given the names GLM 1, GLM 2, GLM 3, GLM 4 and GLM 5. Describing these "models," the authors groundlessly claimed that each of them identified neurons encoding something in some different way. The claims are never justified, and seem like pure imagination.
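Since the paper's actual models and code are unpublished, we can only illustrate the general shape of such a procedure. The sketch below is hypothetical: the regressor names, data, and threshold are all invented. It fits a family of simple linear models to pure noise, in the spirit of a GLM 1 through GLM 5 scan, and finds plenty of neurons that appear to "encode" each attribute.

```python
# Fit one simple linear model per invented "reward attribute" to every
# noise neuron, and count how many neurons come out "significant."
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials = 300, 60
counts = rng.poisson(30, size=(n_neurons, n_trials)).astype(float)  # pure noise

# Five invented regressors standing in for "probability", "magnitude", etc.
names = ["probability", "magnitude", "risk", "value", "salience"]
regressors = {name: rng.uniform(size=n_trials) for name in names}

def slope_t(x, y):
    """t-statistic for the slope of a simple least-squares regression of y on x."""
    x, y = x - x.mean(), y - y.mean()
    slope = (x @ y) / (x @ x)
    resid = y - slope * x
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (x @ x))
    return slope / se

encoders = {name: 0 for name in names}
for name, reg in regressors.items():  # one pass per "GLM"
    for i in range(n_neurons):
        if abs(slope_t(reg, counts[i])) > 2.0:  # roughly p < 0.05
            encoders[name] += 1

print(encoders)  # roughly 15 of 300 noise neurons "encode" each attribute
```

Run five such scans and dozens of "encoding neurons" materialize out of nothing; change the random seed and a different set of neurons "encodes" each attribute.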

Procedures like this almost invariably involve computer programming code that passes the data through programming loops. It's kind of like what's depicted in the visuals below. 


keep torturing the data until it confesses

Normally you get an idea of how objectionable the data processing was by looking at the programming code used by the scientists. There is no excuse for any study of this kind failing to publish its code, as there are nowadays various platforms and web sites that make it very easy for someone to make code publicly available online. Typically in cases such as this you can examine the programming code and find some convoluted, poorly documented mess demonstrating that the data was being monkeyed with in wild and weird ways. If, on the other hand, you were to find some clean, straightforward, well-documented code, it might be a sign of a respectable methodology.

But often the scientists will not make their programming code publicly available. This is a strong reason for suspecting that the code involved is some programming horror the scientists were too embarrassed to publish. In the case of the paper discussed here ("Dynamic coding and sequential integration of multiple reward attributes by primate amygdala neurons," which you can read here), the authors have failed to make their programming code public. We should have no confidence in any of the authors' claims to have found "probability-coding neurons," but we can assume that the authors were not proud of their own coding.

The authors are just engaging in the most groundless "Jesus in my toast" pareidolia when they make these claims:

"Amygdala neurons frequently signalled reward probability in an abstract, stimulus-independent code that generalized across cue formats. While some probability-coding neurons were insensitive to magnitude, signalling ‘pure’ probability rather than value, many neurons showed biphasic responses that signalled probability and magnitude in a dynamic (temporally-patterned) and flexible (reversible) value code. Specific neurons integrated these reward attributes into risk signals that quantified the uncertainty of expected rewards, distinct from value."

Making these claims, the authors are like someone staring at 1000 photos of clouds in the sky and finally declaring that he sees angel castles and animal ghosts. Among their procedural sins:

(1) Their study group was not even one seventh of the size needed for a study like this to be taken seriously.

(2) They have not revealed the exact details of their procedural method, because they failed to publish the programming code that their claims are dependent on. 

(3) Instead of testing a single hypothesis and declaring that it either succeeded or failed, they kept playing around with different ways of analyzing data until they found something they thought produced a publishable result. 

All claims that the authors make about finding neurons "encoding" something are spurious and groundless. Their description of the analysis algorithms they fooled around with is a description of a tangled, tortured gobbledygook rigamarole, some "witches' brew" of statistical spaghetti-code shenanigans that the authors give no justification for. You could give the data they recorded to 100 neuroscientists and ask them whether they saw anything in it; and not one of them would claim to see any evidence of "probability encoding neurons" unless you primed the pump by mentioning such a notion to them.
