Monday, September 16, 2024

No, They Didn't Find in a Brain "3 Copies of Every Memory," and They Never Even Found One

In my post "Why the Academia Cyberspace Profit Complex Keeps Giving Misleading Brain Research Reports" I discussed the economic reasons why we keep getting misleading research about brains, and misleading headlines about brain research. The analysis in that post holds true not just for brain research, but for scientific research in general. We live in an economy in which misleading stories about scientific research and groundless but interesting-sounding scientific speculation are highly incentivized. To give a short synopsis of what I discussed at much greater length in that post, the economic motivations are like this:

(1) Scientists are judged by how many papers they publish and how many citations such papers get.

(2) Because of publication bias (in which papers reporting positive results and particularly interesting-sounding positive results are more likely to be published), scientists are strongly motivated to publish papers claiming positive results and also claiming interesting-sounding results.

(3) Wishing to make themselves appear like sources of important research breakthroughs to help justify their exorbitant tuition, universities are motivated to produce press releases exaggerating the importance of research papers published by their professors.

(4) Since science news is published on web pages with ads that generate revenue for the people running or funding the web pages, with revenue proportional to how interesting-sounding a story is, those running science news web sites or science analysis web sites have an enormous economic motivation to create clickbait headlines that generate higher numbers of page views, and more advertising revenue. Science news sites these days are almost always built in the form of headlines that you must click to read the story, and each time this causes a web page with ads to appear, the people running or funding the site get money from views of the ads displayed on the page you opened up.

The result of all of this is a very wacky world we might call the world of scitainment, to coin a word that combines the words "science" and "entertainment." Scitainment is a part of the internet that blends science and entertainment. Very much of what we read in this strange world of scitainment is true, and very much of it is false. The world of scitainment blends fact and fantasy, always trying its best to produce entertaining stories and clickbait headlines. It's all about luring you in to click on the stories, so that you go to pages that generate ad revenue for the people running the web sites. 

[Image: science hype]

One of the web sites involved in pushing scitainment is the ad-heavy site www.livescience.com, where we have many science headlines that simply are not true. To give some examples:

  • On the Livescience site we had the utterly untrue headline " 'Building blocks of life' discovered on Mars in 10 different rock samples." The story discusses some observations of biologically irrelevant chemicals on Mars, none of which are ingredients of life or building blocks of life.
  • The same Livescience site had an article claiming a woman was hit by a meteorite while drinking coffee outside, although a space.com story tells us no such thing happened. 
  • A story at the LiveScience site was entitled " 'This might be the seeds of life': Organic matter found on asteroid Ryugu could explain where life on Earth came from." The story was rubbish for several reasons: (1) Scientists do not believe that life ever existed on the asteroid  Ryugu or on any other asteroid. (2) There is no scientific concept of any such thing as a "seed of life," in the sense of something causing life to arise from non-life (with the exception of plant seeds, and plant seeds were not found on Ryugu).  (3) No actual components of life were found on the asteroid Ryugu, and most organic molecules are not components of life. 

  • Another story at the LiveScience site referred to a claimed detection of uracil (a nucleobase found in RNA, not an amino acid) on an asteroid, in the faintest trace amount of only 13 parts per billion. The headline at the LiveScience site made the very untrue claim that this "could explain the origin of life." Living things require twenty types of amino acids, which must be massively arranged in very specially ordered sequences to make many types of the very hard-to-achieve molecules called proteins, along with nucleic acids and much else. The detection of one simple organic molecule in the faintest trace amounts no more explains the origin of life than the discovery of a twig on the ground (making the letter "I") explains the origin of books consisting of very much well-constructed prose.

  • Another article on the LiveScience site was devoted to selling the groundless idea that there is a "dark mirror" universe inside ours. 

  • Another article on the LiveScience site had the nutty title "The 1st life in the universe could have formed seconds after the Big Bang."  Anyone familiar with the incredibly high temperatures and density at such a time (preventing all chemistry and even the existence of atoms) should understand how crazy such a claim is. 

  • Another article on the LiveScience site had the phony title "Here's what we learned about aliens in 2020," a reference to extraterrestrials. Of course, we did not learn anything about extraterrestrials in that year. 

  • Another article on the LiveScience site had the phony title "These weird lumps of 'inflatons' could be the very first structures in the universe."  We saw a visual of some strange structure that looked like a planetary nebula. The caption read, "Shown here, one of the dense clumps of inflatons that emerged during the inflation phase of the Big Bang, in the infant universe."  The caption led the reader to believe he was looking at some photo of something in space.  But the photo was not a photo of anything observed in space.  It was merely a photo of some junk generated by an entirely speculative computer program. No actual "inflatons" have ever been observed, and the program was based on one of the innumerable speculative models of the unproven cosmic inflation theory.


As the examples above show, you should not assume a claim is true merely because you read a headline suggesting it is true at the LiveScience site. The latest example of a misleading headline at the site is an article with the groundless headline "The brain stores at least 3 copies of every memory." Human beings recall things, but no scientist has ever discovered even one memory in a brain.

You can pretty much figure out that the story is baloney the moment you read that the research discussed is based merely on mice rather than humans. Letting our imaginations run wild, we can imagine some investigator of human brain tissue confirming the claim that brains keep three different copies of each memory. For example, an investigator might keep scanning the brain of a dead person, and then announce something like, "I found the words 'the battle of Hastings occurred in 1066' in three different spots of the brain." But we can imagine no possible observations of mouse brains that would ever justify the claim that a memory was stored in a mouse brain. For example, a researcher could never announce that he found the words "mouse traps are dangerous" in some part of a mouse brain, simply because mice don't use language.

Misspeaking both in its headline and in its text, the article says, "The scientists found that, in rodents, the brain stores at least three copies of a given memory, encoding it in multiple places in the organ." No, scientists found no such thing. The article refers to the junk-science paper "Divergent recruitment of developmentally defined neuronal ensembles supports memory dynamics." It is true that in the abstract of the paper the authors claim " we discovered that memory encoding resulted in the concurrent establishment of multiple memory traces in the mouse hippocampus."  But because the authors used very bad research practices, they provided not the slightest bit of robust evidence for such a claim. 

The paper is behind a paywall, but anyone can read a preprint of the paper that allows us to see the Questionable Research Practices that were used.  The defects are as follows:

(1) The study group sizes were way too small, consisting of groups such as only 4 mice or only 5 mice or only 8 mice or only 10 mice. No one should take seriously any experimental rodent research study using fewer than 15 rodents in each study group, and for most effect sizes a larger study group size such as 30 mice is needed. The authors would have discovered the inadequacy of their study group sizes if they had done a sample size calculation like good scientists, but they failed to do that; a sketch of such a calculation is given below. The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" has a Table 2 giving recommendations for minimum study group sizes for different types of research. According to that table, the minimum number of subjects for an experimental study is 21 subjects per study group.

[Image: minimum sample sizes]
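To illustrate the kind of sample size calculation the authors skipped, here is a minimal sketch using Python and the statsmodels library. The choice of library and the effect sizes used are my own illustrative assumptions, not anything taken from the paper; the sketch simply solves for the number of animals per group needed to reach the conventional 80% power at a 0.05 significance level in a basic two-group comparison.

# A minimal sketch of a standard two-group sample size calculation.
# The effect sizes below are illustrative assumptions, not values from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("large effect (d = 0.8)", 0.8), ("medium effect (d = 0.5)", 0.5)]:
    # Animals per group needed for 80% power at alpha = 0.05 (two-sided t-test).
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                       alternative='two-sided')
    print(label, "-> about", round(n_per_group), "animals per group")

Even under the generous assumption of a large effect, a calculation like this calls for roughly two dozen animals per group, and for a medium effect more than sixty; study groups of 4 to 10 mice fall far short of either figure.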


(2) We hear no discussion of the following of a detailed blinding protocol, something that would need to exist for a study like this to be taken seriously. The only mention of blinding is the mere remark that "To reduce potential bias in the analysis, the researcher conducting the analysis was blind to the experimental group to which animals belonged until after the data analysis was completed." When you are using very small study group sizes such as only 4 mice or only 8 mice, it is typically the case that mice can be recognized visually, meaning a researcher can tell things he was not told, such as whether a mouse was in a control group. Serious blinding requires a careful protocol that would take at least a long paragraph to state, and we have no evidence of such a thing in this paper.

(3) The experiment was thoroughly entangled with the use of a worthless technique for measuring memory recall in mice, the defective technique of trying to judge "freezing behavior" in mice.  The preprint paper uses the word "freezing" 77 times, to show how the experiments were thoroughly dependent upon the use of such a technique. All experimental neuroscience papers depending on such judgments of "freezing behavior" are junk science papers. 

"Freezing behavior" judgments work like this:

(1) A rodent is trained to fear some particular stimulus, such as a red-colored shock plate in his cage. 

(2)  At some later time (maybe days later) the same rodent is placed in a cage that has the stimulus that previously provoked fear (such as the shock plate). 

(3) Someone (or perhaps some software) attempts to judge what percent of a certain length of time (such as 30 seconds or 60 seconds) the rodent is immobile after being placed in the cage. Immobility of the rodent is interpreted as "freezing behavior" in which the rodent is "frozen in fear" because it remembered the fear-causing stimulus such as the shock plate. The percentage of time the rodent is immobile is interpreted as a measurement of how strongly the rodent remembers the fear stimulus; a rough sketch of how such a "freezing percentage" is computed is given just below.
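To make concrete what a "freezing percentage" amounts to, here is a minimal sketch in Python of the kind of computation typically involved, assuming a video tracker that outputs a motion score for every frame. It is not any lab's actual code, and the motion threshold and the length of the scoring window are arbitrary choices of mine, which is precisely part of the problem discussed below.

# A minimal sketch (not any lab's actual code) of a "freezing percentage"
# computed from per-frame motion scores produced by a video tracker.
import numpy as np

def freezing_percentage(motion_scores, fps, window_seconds, motion_threshold):
    # Percent of frames in the first window_seconds whose motion score falls
    # below the threshold, i.e. frames judged "immobile."
    n_frames = int(fps * window_seconds)
    window = np.asarray(motion_scores[:n_frames])
    return 100.0 * (window < motion_threshold).mean()

# Example with fabricated data: 30 frames per second, five minutes of scores,
# a 60-second scoring window, and an arbitrary motion threshold of 0.05.
rng = np.random.default_rng(0)
fake_motion = rng.random(30 * 300)
print(freezing_percentage(fake_motion, fps=30, window_seconds=60, motion_threshold=0.05))

Every number in that sketch (the threshold that defines "immobile" and the length of the window) is a judgment call, and different choices yield different "memory" measurements from the same animal.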

This is a ridiculously subjective and inaccurate way of measuring whether a rodent remembers the fear stimulus. There are numerous problems with this technique:

(1) There are two contradictory ways in which a rodent might physically respond after seeing something associated with fear: a flight response (in which the rodent attempts to escape) and a freezing response (in which the rodent freezes, not moving). It is all but impossible to disentangle which response is displayed when the rodent is presented with a fear stimulus. A rodent who remembers a fear stimulus might move around trying to escape the feared stimulus. But under the "freezing behavior" method, such movement would not be recorded as memory of the feared stimulus, even though the fear stimulus was recalled. 

(2) Rodents often display hard-to-judge movement that seems like neither immobility nor fleeing, and it is subjective and unreliable to judge whether such movement counts as "freezing behavior" or immobility.

(3) Movement of a rodent in a cage may be largely random, and not a good indication of whether the rodent is afraid and whether the rodent is recalling some fear stimulus. 

(4) Rodents encountering a fear-provoking stimulus in human homes (such as a mouse hearing a human shriek) almost never display freezing behavior, and much more commonly display fleeing behavior. I lived in a New York City apartment for many years in which I would suddenly encounter mice, maybe about 10 times a year. I never once saw a mouse freeze when I shrieked upon seeing it, but invariably saw the mouse flee. 

(5) Freezing behavior in a rodent may last for a mere instant, just as it does in humans. So it may be extremely fallacious to do something such as observing 30 seconds or 60 seconds of rodent movement or non-movement, and trying to infer fear or recall from a "freezing percentage" over such an interval. Almost all of that time may be random behavior having nothing to do with fear in the rodent or memory recall in the rodent.

For experiments not involving recall of a fearful stimulus, the Morris Water Maze test can be used to reliably measure recall in rodents. There are two reliable ways to measure fear recall in rodents. The first is to measure heart rate, which spikes very dramatically in rodents when they are afraid. The second is to measure avoidance of a fearful stimulus. The simple technique is illustrated in the visual below:

But instead of using such reliable techniques, our neuroscientists continue to use the very unreliable technique of trying to judge recall of fear-related memories in animals by making subjective judgments of "freezing behavior." Why would they continue to use so stupid and unreliable a technique? I can think of two reasons:

(1) Neuroscientists are People of Custom just like Roman Catholic priests are People of Custom. So neuroscientists may keep using some very old and ineffective technique as a matter of "clinging to the old custom," rather like the way Roman Catholic priests kept reciting the Mass in Latin very long after almost no one understood Latin. 

(2) Neuroscientists may prefer to use an unreliable technique for measuring fear-related memory recall in rodents, because using that bad technique increases the chance of them producing research papers that report invalid but interesting-sounding results consistent with "brains store memories" dogmas.  Similarly, if a researcher uses an unreliable technique for detecting heat traces in clouds, it will increase the chance that he can end up with some paper claiming to show heat blips in clouds that he may claim as evidence for extraterrestrial spaceships in the sky. The unreliable measurement technique is the best friend of the person trying to support untrue claims. 

Thoroughly dependent on a bad measurement technique for judging whether rodents recalled a fearful stimulus, and also involving way-too-small study group sizes such as only 4 or 5 rodents, the low-quality science paper "Divergent recruitment of developmentally defined neuronal ensembles supports memory dynamics" has provided zero robust evidence that there is a copy of a memory in any brain. No such robust evidence has ever been provided by neuroscientists. As discussed in my post here, the quickly-preserved brains of thousands of people have been thoroughly studied by different "brain bank" projects, and by microscopic examination no one ever found the slightest evidence of a memory stored in a brain. Never through microscopic examination of a brain has even a single piece of information as small and humble as "birds fly" or "dogs bark" or "Earth has a moon" ever been found, nor has anyone ever found in any brain by microscopic examination even the crudest or blurriest image of anything anyone saw.

[Image: typical neuroscience press release]

[Image: typical neuroscience paper]

Postscript: When "freezing behavior" judgments are made, there are no standards in regard to how long an animal should be observed when recording a "freezing percentage" (a percentage of time the animal was immobile). An experimenter can choose any length of time between 30 seconds and five minutes or more (even though it is senseless to assume rodents might "freeze in fear" for as long as a minute). Neuroscience experiments typically fail to pre-register experimental methods, leaving experimenters free to make analysis choices "on the fly," after they have gathered data. So you can imagine how things might work. An experimenter might record five or ten minutes of movement after a rodent was exposed to a fear stimulus, and then check the first 30 seconds: if a desired above-average (or desired below-average) amount of immobility occurred in that interval, 30 seconds would be chosen as the interval for the "freezing percentage" graph. If not, the experimenter could try 60 seconds, then two minutes, and so on, up to five or ten minutes, stopping whenever the numbers looked the way he wanted. Such shenanigans drastically depart from good, honest, reliable experimental methods. A toy simulation of this kind of interval shopping is given below.
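To see how much room for mischief such post-hoc window choosing creates, here is a toy simulation in Python, using entirely fabricated data rather than anything from the paper. Two groups of simulated mice are constructed to behave identically, so any "significant" difference a test finds is a false positive; the simulation compares a pre-registered 30-second window with the practice of picking whichever window gives the best-looking result.

# A toy simulation (fabricated data, not the paper's) of how picking the
# "freezing percentage" window after seeing the data inflates false positives.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
fps, n_per_group, n_experiments = 30, 8, 1000
candidate_windows = [30, 60, 120, 300]   # seconds

false_pos_fixed = false_pos_shopped = 0
for _ in range(n_experiments):
    # Per-frame immobility over 5 minutes; both groups have identical statistics.
    group_a = rng.random((n_per_group, fps * 300)) < 0.3
    group_b = rng.random((n_per_group, fps * 300)) < 0.3

    pvals = []
    for w in candidate_windows:
        freeze_a = group_a[:, :fps * w].mean(axis=1) * 100   # freezing % per mouse
        freeze_b = group_b[:, :fps * w].mean(axis=1) * 100
        pvals.append(ttest_ind(freeze_a, freeze_b).pvalue)

    false_pos_fixed   += pvals[0] < 0.05      # always use the 30-second window
    false_pos_shopped += min(pvals) < 0.05    # use whichever window "works"

print("false-positive rate, fixed 30 s window:", false_pos_fixed / n_experiments)
print("false-positive rate, best-looking window:", false_pos_shopped / n_experiments)

The first rate stays near the nominal 5 percent; the rate with window shopping comes out noticeably higher, even though there is no real difference between the groups to find.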

The paper discussed above did not pre-register any methods, so after gathering data the experimenters were free to analyze the data in any way they pleased. Some of their "freezing behavior" graphs are made using a time interval of three minutes, and others are made using a time interval of five minutes. Genuine fear-freezing in an animal would be something lasting only a few seconds. The longer the interval of time used as a basis for a "freezing behavior" graph, the more unreliable freezing behavior judgments are as a measurement of fear recall. When an interval of longer than 30 seconds is used as the basis for a "freezing percentage" graph, then you have a particularly unreliable and particularly deplorable use of such a technique; and the longer the time interval is above 30 seconds, the more unreliable and deplorable are claims that such graphs are measurements of how well an animal recalled something. 

Monday, September 9, 2024

Neuroscientists Senselessly Think They Can Perform Innumerable Contortions of Brain Data, and Then Claim a Discovery

Claims by neuroscientists that they have found "representations" in the brain (other than genetic representations) are examples of what very abundantly exists in biology: groundless achievement legends. There is no robust evidence for any such representations. 

Excluding the genetic information stored in DNA and its genes, there are simply no physical signs of learned information stored in a brain in any kind of organized format that resembles some kind of system of representation. If learned information were stored in a brain, it would tend to have an easily detected hallmark: the hallmark of token repetition.  There would be some system of tokens, each of which would represent something, perhaps a sound or a color pixel or a letter. There would be very many repetitions of different types of symbolic tokens.   Some examples of tokens are given below. Other examples of tokens include nucleotide base pairs (which in particular combinations of 3 base pairs represent particular amino acids), and also coins and bills (some particular combination of coins and bills can represent some particular amount of wealth). 
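To make the idea of token repetition concrete, here is a small self-contained example in Python. It uses the one clear-cut case granted above, the genetic code, in which each three-base codon is a token standing for a particular amino acid; the code and the sample DNA string are my own illustrative choices, and only a handful of codons of the standard code are listed.

# A small illustration of token repetition, using the genetic code: each
# three-base codon is a symbolic token standing for a particular amino acid.
# Only a few codons of the standard code are listed here.
CODON_TO_AMINO_ACID = {
    "ATG": "Met", "TGG": "Trp", "GGC": "Gly", "AAA": "Lys", "GAA": "Glu",
}

def count_tokens(dna):
    # Count how often each known codon (token) repeats in a DNA string.
    counts = {}
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        if codon in CODON_TO_AMINO_ACID:
            counts[codon] = counts.get(codon, 0) + 1
    return counts

print(count_tokens("ATGGGCGGCAAAGAAGGCTGG"))   # note the repeated GGC token

Stored symbolic information betrays itself by this kind of repetition of a limited set of tokens, and that is precisely the hallmark that has never been identified in brain tissue for any learned information.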

[Image: symbolic tokens]

Other than the nucleotide base pair triple combinations that represent mere low-level chemical information such as amino acids, something found in neurons and many other types of cells outside of the brain, there is no sign at all of any repetition of symbolic tokens in the brain. Except for genetic information which is merely low-level chemical information, we can find none of the hallmarks of symbolic information (the repetition of symbolic tokens) inside the brain. No one has ever found anything that looks like traces or remnants of learned information by studying brain tissue. If you cut off some piece of brain tissue when someone dies, and place it under the most powerful electron microscope, you will never find any evidence that such tissue stored information learned during a lifetime, and you will never be able to figure out what a person learned from studying such tissue.  This is one reason why scientists and law enforcement officials never bother to preserve the brains of dead people in hopes of learning something about what such people experienced during their lives, or what they thought or believed, or what deeds they committed.    

But despite their complete failure to find any robust evidence of non-genetic representations in the brain, neuroscientists often make groundless boasts of having discovered representations. What is going on is pareidolia, people reporting seeing something that is not there, after wishfully analyzing large amounts of ambiguous and hazy data. It's like someone eagerly analyzing his toast every day for years, looking for something that looks like the face of Jesus, and eventually reporting he saw something that looked to him like the face of Jesus.  It's also like someone walking in many different forests, eagerly looking for face shapes on trees, and occasionally reporting a success, or like someone scanning the sky, looking for clouds that look like animal shapes.

[Image: pareidolia]

The latest example of nonsensical neuroscientist pareidolia is to be found in a press release from Columbia University, and the paper that press release describes in very misleading terms. The press release has the phony headline "Scientists Capture Clearest Glimpse of How Brain Cells Embody Thought." When you read a headline like that, you should remember a sad truth that has been glaringly obvious for many years now: university press releases on topics of science are no more trustworthy than corporate PR press releases. There is the most gigantic amount of lying, hype and misrepresentation in university press releases these days, and such baloney occurs in equal amounts in the press releases of every major university. So don't think for a moment that you can trust a press release because it came from Harvard or Columbia or Yale or Oxford University. I wish I had a dollar for every bogus press release that has been issued by such institutions. The subtitle of the press release is the 100% untrue claim "Recordings from thousands of neurons reveal how a person's brain abstractly represents acts of reasoning."

We have an utterly groundless claim by a neuroscientist that he and his colleagues found a “uniquely revealing dataset that is letting us for the first time monitor how the brain’s cells represent a learning process critical for inferential reasoning." We then have an equally groundless claim by another neuroscientist that "this work elucidates a neural basis for conceptual knowledge, which is essential for reasoning, making inferences, planning and even regulating emotions.”

Do the authors claim to have seen some structure in the brain corresponding to these claims? Certainly not. They did not do any brain imaging such as MRI scans. All they had to work with were EEG readings, readings of brain waves. Would anyone have seen any sign of such claimed representations by visually examining the wavy lines of these EEG readings? Certainly not.

The press release reveals that what is going on is an affair that can be described as "keep torturing the data until it confesses in the weakest voice." We read this:

"The researchers recast the volunteers’ brain activity into geometric representations – into shapes, that is – albeit ones occupying thousands of dimensions instead of the familiar three dimensions that we routinely visualize. 'These are high-dimensional geometrical shapes that we cannot imagine or visualize on a computer monitor,' said Dr. Fusi. 'But we can use mathematical techniques to visualize much simplified renditions of them in 3D.' ” 

What a joke. They didn't see any such representations of knowledge or thought by a simple examination of the brain wave data they acquired. So they kept fiddling with their data and manipulating the data and contorting the data with some kind of absurdly convoluted and byzantine analysis pathway, and then claimed to see representations or shapes in such super-manipulated data. It's like someone taking 1000 pictures of the clouds in the sky, and then playing around all day with image manipulation filters until he got something that looked like an animal shape in one of the clouds. 

The paper being described (which you can read here) lays out the huge chain of arbitrary manipulations and arbitrary naming that went on. It's a laughably convoluted and byzantine analysis pathway in which many dozens of arbitrary analysis decisions are made.

[Image: spaghetti code neuroscience analysis]

Here from the paper is a description of just a small fraction of the "keep torturing the data until it confesses" spaghetti code craziness that went on:

" A cross-session-group PS was then computed by applying the same alignment to a pair of held-out conditions, one on either side of the current dichotomy boundary. Alignment and cross-group comparisons were performed in a space derived using dimensionality reduction (six dimensions). For a given dichotomy, two groups of sessions with N and M neurons were aligned by applying singular value decomposition to the firing-rate normalized condition averages of all but two of the eight task conditions, one on either side of the dichotomy boundary. The top six singular vectors corresponding to the non-zero singular values from each session group were then used as projection matrices to embed the condition averages from each session group in a six-dimensional space. Alignment between the two groups of sessions, in the six-dimensional space, was then performed by computing the average coding vector crossing the dichotomy boundary for each session group, with the vector difference between these two coding vectors defining the ‘transformation’ between the two embedding spaces. To compare whether coding directions generalize between the two groups of sessions, we then used the data from the two remaining held-out conditions (in both session groups). We first projected these data points into the same six-dimensional embedding spaces and computed the coding vectors between the two in each embedding space. We then applied the transformation vector to the coding vector in the first embedding space, thereby transforming it into the coordinate system of the second session groups. Within the second session group embedding space, we then computed the cosine similarity between the transformed coding vector from the first session group and the coding vector from the second session group to examine whether the two were parallel (if so, the coding vectors generalize). We repeated this procedure for each of the other three pairs of conditions being the held-out pair, thereby estimating the vector transformation of each pair of conditions independently. The average cosine similarity was then computed over the held-out pairs. All possible configurations of conditions aligned on either side of the dichotomy boundary are considered (24 in this case), and the maximum cosine similarity over configurations is returned as the PS for that dichotomy (plotted as ‘cross-half’ in Extended Data Fig. 3z)."

This is only a very small fraction of the "manipulate the data like crazy" nonsense that was going on. The paper describes a total amount of statistical rigmarole that seems ten times more complicated than what is quoted above. Never be impressed when you read about such operations. The more complicated such paragraphs are, the more it shows that the original data did not have the final result claimed, and that the authors had to play "keep torturing the data until it confesses" games, using a long series of arbitrary data manipulations and data contortions to try to "gin up some success." Strangely, we read almost nothing in the way of justification for these bizarre data manipulations and data contortions. It is as if the authors thought they had the right to dream up the most enormously convoluted data manipulation scheme, without justifying the bizarre data distortions they were doing.
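For readers trying to parse what operations like those even are, here is a greatly simplified sketch in Python of the general kind of step named in the quoted passage: reducing condition-average firing rates with a singular value decomposition, forming "coding vectors" between conditions, and comparing directions by cosine similarity. This is emphatically not the paper's pipeline; every number, name and shortcut here is an illustrative assumption of mine, and the real analysis layers many more choices on top.

# A greatly simplified sketch (not the paper's pipeline) of the kind of step
# named in the quoted passage: SVD-based dimensionality reduction of condition
# averages, "coding vectors" between conditions, and cosine similarity.
import numpy as np

rng = np.random.default_rng(2)
n_conditions, n_neurons, n_dims = 8, 50, 6

# Fabricated condition-average firing rates (conditions x neurons).
condition_averages = rng.normal(size=(n_conditions, n_neurons))

# Keep the top six right singular vectors as a projection matrix.
_, _, vt = np.linalg.svd(condition_averages, full_matrices=False)
embedded = condition_averages @ vt[:n_dims].T        # conditions x 6 dimensions

# "Coding vectors" between two arbitrary pairs of conditions.
coding_vector_1 = embedded[0] - embedded[1]
coding_vector_2 = embedded[2] - embedded[3]

# A cosine similarity near 1 would be read as the two directions being "parallel."
cosine = (coding_vector_1 @ coding_vector_2) / (
    np.linalg.norm(coding_vector_1) * np.linalg.norm(coding_vector_2))
print(round(cosine, 3))

Even in this stripped-down form, the number of dimensions kept, which conditions are paired, and what counts as "parallel" are all free choices, and the quoted passage stacks many more such choices (held-out conditions, alignment transformations, a maximum taken over 24 configurations) on top of them.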

A look at some of the programming code used shows that all the data was being passed through doubly-nested loops that were doing God-only-knows-what:

%for every area in the dataset
for i = 1:length(cell_groups)
    
    %run analysis for the inference absent sessions
    idx_current = intersect(cell_groups{i},find(ismember(sessions,inference_absent)));

    for i_rs = 1:n_resample
        [avg_array,~] = construct_regressors(neu,n_samples(i),idx_current);
    
        [t_1,t_2]        = sd(avg_array,n_perm_inner,n_samples(i));
        sd_{i,1}         = cat(2,sd_{i,1},t_1);
        sd_boot{i,1}     = cat(2,sd_boot{i,1},t_2);
        [ccgp_{i,1}]     = cat(2,ccgp_{i,1},ccgp(avg_array,...
                           n_perm_inner,false,n_samples(i)));
        [ccgp_boot{i,1}] = cat(2,ccgp_boot{i,1},ccgp(avg_array,...
                           n_perm_inner,true,n_samples(i)));
        ps_{i,1}         = cat(2,ps_{i,1},ps(avg_array,...
                           n_perm_inner,false));
        ps_boot{i,1}     = cat(2,ps_boot{i,1},ps(avg_array,...
                           n_perm_inner,true));
    end
    
    %run again for the inference present sessions
    idx_current = intersect(cell_groups{i},find(ismember(sessions,inference_present)));

    for i_rs = 1:n_resample
        [avg_array,~] = construct_regressors(neu,n_samples(i),idx_current);
    
        [t_1,t_2]        = sd(avg_array,n_perm_inner,n_samples(i));
        sd_{i,2}         = cat(2,sd_{i,2},t_1);
        sd_boot{i,2}     = cat(2,sd_boot{i,2},t_2);
        [ccgp_{i,2}]     = cat(2,ccgp_{i,2},ccgp(avg_array,...
                           n_perm_inner,false,n_samples(i)));
        [ccgp_boot{i,2}] = cat(2,ccgp_boot{i,2},ccgp(avg_array,...
                           n_perm_inner,true,n_samples(i)));
        ps_{i,2}         = cat(2,ps_{i,2},ps(avg_array,...
                           n_perm_inner,false));
        ps_boot{i,2}     = cat(2,ps_boot{i,2},ps(avg_array,...
                           n_perm_inner,true));
    end

end

Every single piece of data is being passed into a function called construct_regressors(), but what is that function doing? We cannot tell, because the code for that function has not been supplied.  We should be suspicious that this construct_regressors() function was doing something so arbitrary and convoluted that the authors were embarrassed to publish the code for that function. 

What we have here is something like the situation described in the visual below:

[Image: keep torturing the data till it confesses]

To pass off the results of so vast an amount of data monkeying as a discovery is a case of BS and baloney. No representations in the brain or brain waves have been discovered here. All we have is scientists manipulating and contorting data like crazy, and then displaying some pareidolia by passing off their super-manipulated data as an example of "representations."  

Can you imagine what a scandal would arise if climate scientists tried to get away with even one tenth of this amount of manipulating and contorting and distorting their data? Skeptics of their work would start "screaming bloody murder," and howl about how scientists were failing to use their original data, and using instead manipulated, contorted, twisted, distorted data.  But it seems that neuroscientists senselessly think that it is okay for them to play around endlessly with data from brains,  and that they have the right to contort and twist and distort such data in dozens of different ways, and then pass off the result (an utterly artificial construction) as something they can then call "what the brain does."  

What can you call data like this, which has undergone so many contortions and manipulations and distortions and transfigurations that it is something almost totally different from the raw data originally gathered? You might be tempted to call it "fake data," but that isn't quite right, because the authors have described the transformations they performed on the data. We can merely say that it is data that has been so enormously contorted, manipulated and transfigured that it cannot be claimed as evidence telling us about brain states or brain activity.

Experimental neuroscience is in a state of great sickness and dysfunction. The use of Questionable Research Practices seems more the rule than the exception in experimental neuroscience. When scientists think that it is okay to perform endless manipulations and contortions and distortions and transfigurations of their data, and to then pass off the resulting artificial mess as "what we got from the brain," it is a sign that neuroscience has sunk to a deep nadir of dysfunction.

Brain waves don't represent anything that someone has learned. Brain waves are no more representations of something learned than clouds are representations of something learned. Brain waves are streams of random data, as random as the stream of clouds passing above a house or  a city.  Not one iota of evidence of brain representations has been presented in this paper. The paper is entitled "Abstract representations emerge in human hippocampal neurons during inference." An honest title for the paper would have been "We got something we called 'abstract representations'  after we manipulated and contorted brain wave readings in dozens of weird ways."

[Image: data manipulation madness, captioned "Contortion craziness"]

Monday, September 2, 2024

UK Biobank Study of Thousands of Brains Finds Negligible Correlation Between Brain Size and Intelligence

I previously described a study published in the January 2021 volume of the journal Cerebral Cortex, one entitled "Is There a Correlation Between the Number of Brain Cells and IQ?" The authors (Nicharatch Songthawornpong, Thomas W Teasdale, Mikkel V Olesen, and Bente Pakkenberg) examined 50 brains of Danish males who had died for reasons other than brain disease. It was possible to reliably estimate the IQ of these Danish males because they all had taken a military mental performance test that very highly correlates with IQ, and is essentially an intelligence test. 

The paper very clearly states its results:

""In our sample of 50 male brains, IQ scores did not correlate significantly with the total number of neurons (Fig. 1A), oligodendrocytes (Fig. 1B), astrocytes (Fig. 1C) or microglia (Fig. 1D) in the neocortex, nor with the cortical volume (Fig. 2A), surface area (Fig. 2B) and thickness (Fig. 2C). This also applied to estimates of the four separate lobes (frontal-, temporal-, parietal-, and occipital cortices; see Supplementary Material). Neither did IQ score correlate significantly with the volumes of white matter (Fig. 2D), central gray matter (Fig. 2E) or lateral ventricles (Fig. 2F), nor with the brain weight (Fig. 3A), or body height (Fig. 3B). All of these correlation coefficients were less than 0.2."

What this means is that the authors found:

  • It is not at all true that the more brain cells you have, the more likely you are to be smart.
  • It is not at all true that the more gray matter in your brain, the more likely you are to be smart.
  • It is not at all true that the more white matter in your brain, the more likely you are to be smart.
  • It is not at all true that the heavier your brain, the more likely you are to be smart.

Although such results do not by themselves show that your brain is not the source of your mind, such results are quite compatible with the hypothesis that your brain is not the source of your mind. But the message coming from the study is not as loud as it might be, given the rather small sample size of only 50 brains.  Much larger studies have been done using scans of thousands of brains. 

The brain scans were done as part of the UK Biobank project. In that project about 29,000 subjects had their brains scanned, and about 7,000 of them also performed four cognitive tests. The study "Structural brain imaging correlates of general intelligence in UK Biobank" (which you can read here) analyzed performance on such tests, and estimated a general intelligence (which it called g) for each of several  thousand people taking the tests, all of whom had their brains scanned. The study then estimated what the correlation was between things such as intelligence and brain volume. The study found a correlation of only .276 between brain volume and intelligence, and a correlation of only .281 between gray matter volume and intelligence. 

How high a correlation is that? In the scientific paper entitled, “A guide to appropriate use of Correlation coefficient in medical research,” we have a Table 1 which has the heading of "Rule of Thumb for Interpreting the Size of a Correlation Coefficient." Here is that table:

Size of Correlation: Interpretation
.90 to 1.00 (−.90 to −1.00): Very high positive (negative) correlation
.70 to .90 (−.70 to −.90): High positive (negative) correlation
.50 to .70 (−.50 to −.70): Moderate positive (negative) correlation
.30 to .50 (−.30 to −.50): Low positive (negative) correlation
.00 to .30 (.00 to −.30): Negligible correlation

So by finding a correlation of only .276 between brain volume and intelligence, the study with its very big sample size of several thousand subjects has found only a negligible correlation between brain size and intelligence. And by finding a correlation of only .281 between gray matter volume and intelligence, the same study has found only a negligible correlation between gray matter volume and intelligence.
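One way to see concretely how weak such correlations are is to square them: the squared correlation coefficient gives the share of variance in one variable statistically accounted for by the other. Here is a minimal Python calculation using the two values reported above.

# r squared = share of variance statistically accounted for; the r values
# below are the ones reported in the UK Biobank study discussed above.
for label, r in [("brain volume vs. g", 0.276), ("gray matter volume vs. g", 0.281)]:
    print(f"{label}: r = {r}, r squared = {r**2:.3f} (about {100*r**2:.0f}% of variance)")

So even taken at face value, brain volume statistically accounts for less than a tenth of the variation in the measured intelligence scores.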

We have a result here quite compatible with the idea that your brain is not the source of your mind. Similarly, the 2019 study discussed here studied the brains of 324 people by brain scanning, and found that neither knowledge nor intelligence had any clear relation to brain parameters. 

The authors of the study "Structural brain imaging correlates of general intelligence in UK Biobank" (which you can read here) seem to have made some arbitrary choices about how to analyze the data they had. In the total UK Biobank data there were more than 10,000 subjects who had their brains scanned and who also did a Verbal Numerical Reasoning test. But the study authors chose to use only about 7000 of those subjects, only those who had taken four cognitive tests. They also made the arbitrary decision to use only Part B of a Trail Making Test (a type of cognitive test) rather than the full data gathered (including a Part A and Part B). We can assume that the authors were trying to do an analysis that would show as high a correlation between brain volume and intelligence as they could get; but they have still only reported a negligible correlation of .276. A different analysis of the same UK Biobank data would have shown an even lower correlation between brain size and intelligence. This is shown by the quote below from the paper:

"Using an earlier data release (Ritchie et al., 2018), we previously estimated the correlation between brain size and one of those tests, 'Fluid Intelligence' (which we refer to as Verbal-Numerical Reasoning) to be r = 0.177. We found that the correlation did not differ by sex. Another study using an earlier release of UK Biobank imaging data examined the association between Verbal-Numerical Reasoning and brain size, reporting a correlation of r = 0.19 (N = 13,608; Nave, Jung, Linnér, Kable, & Koellinger, 2019)."

The second paper referred to is "Are Bigger Brains Smarter? Evidence From a Large-Scale Preregistered Study," which you can read here. The authors of that paper falsely described the negligible correlation of only .19 they found between brain volume and fluid intelligence as "robust." As the table above tells us, all correlations of less than .30 are properly described as "negligible."

Different types of cognitive tests usually measure things beyond just intelligence, because test scores can be influenced by things such as manual dexterity, visual perception and a tendency of an unmotivated mind to wander. The type of very low correlations reported above can easily be accounted for under the idea that your brain is not the source of your mind, by supposing that slight differences in things such as manual dexterity and visual perception are showing up in the test scores.

[Image: things affecting IQ scores]