Tuesday, May 20, 2025

The Groundless Myth of Concept-Selective Regions in the Brain

 Here is one way that a type of scientific myth can arise:

(1) Scientists very interested in providing evidence for some untrue claim (which they may believe to be true) may do poorly designed research guilty of various types of Questionable Research Practices, such as way-too-small study group sizes, a lack of pre-registration, a lack of a proper blinding protocol, and poor, unreliable measurement techniques. Such scientists then write up a paper incorrectly claiming that their research supports the claim.

(2) Papers such as these get endlessly cited in the popular press and also in other scientific papers. Typically, when a scientific paper cites the poor-quality research, no mention is made of any of the factors that disqualify the cited paper as an example of good, robust evidence. Whatever the paper claimed to show will simply be cited by some other paper as a fact that was established by research.

In my previous post I documented a case of this type of citation bungling. In my 2024 post "Papers Claiming Brain Memory Storage Keep Citing Poor Science Papers" I examined each of the references at the end of this sentence in a scientific paper: "There is now a substantial body of evidence based on recently developed techniques, including optogenetics, chemogenetics, electrophysiology, and multiphoton confocal imaging, to suggest that memory for basic types of behavioral learning such as contextual fear conditioning is maintained in a population of neurons referred to as engram cells [4][5][6][7][8]." For each one of the studies cited at the end of this sentence, I wrote a separate paragraph showing that the cited study was not an example of robust scientific research, but instead a low-quality study guilty of various types of Questionable Research Practices such as way-too-small study group sizes, a lack of pre-registration, a lack of a proper blinding protocol, and poor, unreliable measurement techniques.

Let us look at another example of this type of misconduct.  In the recent preprint "MINDSIMULATOR: EXPLORING BRAIN CONCEPT LOCALIZATION VIA SYNTHETIC FMRI," which you can read here, we have the following spurious claim:

"Numerous neuroscience studies have illustrated that specific regions of the visual cortex exhibit concept selectivity. When individuals receive visual stimuli related to particular concepts (such as places, bodies, faces, words, colors, and foods), the respective cortical regions exhibit significant activation (Epstein & Kanwisher, 1998; Sergent et al., 1992; Jain et al., 2023; Pennock et al., 2023; Kanwisher et al., 1997; Allen et al., 2022). These regions are termed visual concept-selective regions and play a vital role in advancing the understanding of brain visual cognition."

This claim that there are "concept-selective" regions of the brain has no basis in fact. It is not true that particular regions of the brain become more active when someone is seeing some particular type of visual stimulus.  All of the papers cited above are examples of very low-quality research severely guilty of Questionable Research Practices. Let us look at each of them. 

Before discussing them, I should explain why brain imaging studies using small study group sizes are worthless. An article on neurosciencenews.com states this: "A new analysis reveals that task-based fMRI experiments involving typical sample sizes of about 30 participants are only modestly replicable. This means that independent efforts to repeat the experiments are as likely to challenge as to confirm the original results." The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" has a Table 2 giving recommendations for minimum study group sizes for different types of research. According to the paper, the minimum number of subjects for an experimental study is 21 subjects per study group. The same table lists 61 subjects per study group as a minimum for a "correlational" study.

In her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky, PhD, calculates that the median effect size of neuroscience studies is about .51. She then states the following, talking about statistical power (something that needs to be .5 or greater to be moderately convincing):

"To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group."

A study with a statistical power of .5 is considered only modestly replicable, something that will be successfully replicated only about half of the time. A study with a statistical power of .8 is considered fairly good evidence. If we describe a power of .5 as being merely modestly replicable, it therefore seems that at least about 31 subjects per study group are needed for an experimental neuroscience study to be worthy of consideration.
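For readers who want to check this arithmetic themselves, here is a minimal sketch of the power calculation, assuming a two-sided, two-sample t-test at an alpha of 0.05 and using the statsmodels Python library; the exact numbers shift slightly under other test assumptions, but they land close to the figures Zalocusky quotes.

```python
# A sketch of the sample-size arithmetic above, assuming a two-sided,
# two-sample t-test at alpha = 0.05 and the median effect size of 0.51
# that Zalocusky reports for neuroscience studies.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.51  # assumed median effect size (Cohen's d)

for target_power in (0.2, 0.5, 0.8):
    # solve_power returns the number of subjects needed per group
    n_per_group = analysis.solve_power(effect_size=effect_size,
                                       power=target_power,
                                       alpha=0.05,
                                       alternative='two-sided')
    print(f"power {target_power:.1f}: about {n_per_group:.0f} subjects per group")

# With these assumptions, power 0.5 requires roughly 30 subjects per group
# and power 0.8 roughly 60 -- far more than the 8 to 15 subjects used in
# the studies cited below.
```

Under any of the standard assumptions, the 8 to 15 subjects per group used in the studies discussed below fall far short of these thresholds.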

Now let us look at the studies cited above as evidence for the claim that there are "concept-selective" regions in the brain.
  • "Epstein & Kanwisher, 1998":  This is a reference to the paper "A cortical representation of the local visual environment,"  which you can read here. The study used a way-too-small study group size of only 9 subjects. So it provided no real evidence for  "concept-selective" regions in the brain. The reported "superior activations" were only about 1%, which can easily be explained as mere random variations. 
  • "Sergent et al., 1992": This is a reference to the paper "Functional neuroanatomy of face and object processing: a positron emission tomography study," and you can read the abstract here. The paper is behind a paywall, and the abstract makes no mention of the number of subjects used. Given that it is almost invariably true that the abstract of an experimental neuroscience paper will list the number of subjects used whenever it has a halfway-decent study group size, we may presume with high confidence that the study group size was too-small for anyone to be claiming the study as good evidence for concept-selective regions in the brain. 
  • " Jain et al., 2023" :   This is a reference to the paper " Selectivity for food in human ventral visual cortex," which you can read here. The paper claims to have found "two food-selective regions in the ventral visual cortex," but the claim is groundless, because it is based on brain-imaging experiments using a way-too-small study group size of only 8 subjects. 
  • "Pennock et al., 2023":  This is a reference to the study "Color-biased regions in the ventral visual pathway are food selective," which you can read here. It's another paper making claims similar to the Jain paper. But the claims are just as groundless, because they are also based on rain-imaging experiments using a way-too-small study group size of only 8 subjects. 
  • "Kanwisher et al., 1997":  This is a reference to the paper "The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception," which you can read here. The paper used a too-small study group size of only 15 subjects, a size too small for decent evidence to be claimed.  As discussed above, a minimum for a study like this to be taken seriously is about 30 subjects. For a long discussion of the weakness of Kanwisher's research on this topic, read my post here
  • "Allen et al., 2022":  This is a reference to the paper "A massive 7T fMRI dataset to bridge  cognitive neuroscience and artificial intelligence,"  which you can read here. The paper merely describes a dataset created by scanning 8 subjects, and makes no claims that any evidence was produced of concept-selective regions in the brain. 
So it is clear that the paper "MINDSIMULATOR: EXPLORING BRAIN CONCEPT LOCALIZATION VIA SYNTHETIC FMRI," which you can read here, was guilty of citation misconduct. The paper claimed that "numerous neuroscience studies have illustrated that specific regions of the visual cortex exhibit concept selectivity," and cited only the papers in the bullet list above to support this claim. But none of the papers cited provided any good evidence to back up such a claim. 

This type of thing constantly occurs in the neuroscience literature. Again and again we have papers claiming that some grand result was established by neuroscience researchers, followed by a list citing a set of papers. But a careful examination of the papers cited will show that none of them provided any good evidence for the grand result claimed. The citation of low-quality research is extremely abundant in neuroscience papers. When such citation becomes common, the neuroscience literature serves to propel and propagate myths and legends: groundless boasts of achievements.

The practice of citing poor-quality research occurs so abundantly in neuroscience papers that you should never assume the truth of any claim made in a neuroscience paper merely because it is followed by a list of citations that does not discuss the details of the papers cited. When people have good evidence to back up a claim, they tell us the details of such evidence. A sentence listing a bunch of neuroscience papers without giving us any details about them should in general be regarded with high suspicion.

If you are writing a neuroscience paper making Claim X, and you know of five well-designed, high-quality studies providing strong evidence for Claim X, then you might do something like providing a bullet list in which each of those studies is described in its own bullet. You would provide details such as saying "Walker and Miller in 2017 did a well-designed pre-registered study in which a strong Claim X effect was reported in 50 subjects, who were compared to 50 control subjects using a stringent blinding protocol." Not being worried about your readers reading the studies you mentioned, you would provide links allowing your readers to conveniently open each of those studies, without having to copy the titles into the search page of Google Scholar.

But if you knew of no such high-quality studies, only low-quality ones, you might merely list those low-quality studies in a single sentence that provided no details about them and no links to them. That way only the most diligent readers of your paper would be able to find out that the studies you had cited were very low-quality studies. If you did that, you would be following the pattern we so often see in neuroscience papers citing poor-quality studies.



The word "selection" refers to a choice made by a conscious agent. 
There is no robust evidence that any region of the brain is "concept-selective." There is no robust evidence that any region of the brain activates more strongly when certain types of things are shown to a person. People select things, but brain regions don't select things. Claims of "concept-selective" brain regions are another example of biologists making deceptive use of the words "selective" or "selection." Biologists have doing that for well over a century, by using the not-literally-true term "natural selection" to refer to some postulated "survival of the fittest" effect that is not actually selection, because it does not involve a choice by a conscious agent. 
