Wednesday, April 1, 2026

Neuroscience Research Is Floundering, So a Huge Cash Prize Recently Went to Weak Research

Inaccurate press accounts hailing low-quality research are a central part of the social construction of groundless triumphal legends in neuroscience. Another key element in the social construction of such groundless triumphal legends occurs when the unwarranted claims are repeated in the papers and textbooks and college lectures of neuroscientists and psychologists. A lesser element in the social construction of such groundless triumphal legends is when big prizes go to researchers who did poor research guilty of Questionable Research Practices. Then the researcher can start boasting, "It must be true -- my research got a big prize!"

John O'Keefe published papers in the 1970s and after claiming to have detected "place units" in the hippocampus of rats. The papers also used the term "place cells." The claim was that certain cells were more active when a rat was in a certain spatial position. Greater activity during some type of observation is not representation. My eyes may widen if I see a naked woman walking down the street, but that is not a case of my eyes representing the naked woman. It has always been a case of misleading language when neuroscientists attempt to pass off claimed higher activation in some neurons as an example of representation. Real representation involves the use of symbolic tokens. Neuroscientists cannot find any symbolic tokens in the brain, other than the symbolic tokens in DNA that represent amino acids.

The "place cells" papers of John O'Keefe that I have examined are papers that do not meet standards of good experimental science. An example of such a paper was the paper "Hippocampal Place Units in the Freely Moving Rat: Why They Fire Where They Fire." For one thing, the study group size used (consisting of only four rats) was way too small for robust evidence to have been produced. Fifteen animals per study group is the minimum for a moderately convincing result in animal studies looking for correlations. For another thing, no blinding protocol was used. And the study was not a pre-registered study, but was apparently one of those studies in which an analyst is free to fish for whatever effect he may feel like finding after data has been collected, using any of innumerable possible analysis pipelines.
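The sample-size complaint can be put in rough quantitative terms. Below is a sketch (in Python) of the statistical power of a correlation test, using the standard Fisher z approximation; the assumed effect size of 0.7 and the 0.05 significance threshold are merely illustrative choices, not figures taken from O'Keefe's papers.

```python
# A rough sketch of why tiny study groups are underpowered, using the
# Fisher z approximation for the power of a Pearson correlation test.
# The effect size (rho = 0.7) and alpha = 0.05 are illustrative assumptions.
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def correlation_power(rho, n):
    """Approximate power of a two-sided test for correlation rho with n subjects."""
    z_effect = math.atanh(rho) * math.sqrt(n - 3)
    z_crit = 1.96  # two-sided alpha = 0.05
    return phi(z_effect - z_crit)

for n in (4, 15):
    print(n, round(correlation_power(0.7, n), 2))
```

Under these illustrative assumptions, a four-animal study has only about a 14% chance of detecting even a strong real effect, while a fifteen-animal study has about an 85% chance, which is one reason tiny study groups so often yield unreliable "findings."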

The visuals in the "place cell" studies done by O'Keefe compared wavy EEG signal lines collected while a rat was in different areas of an enclosed unit. You can see what I'm talking about by looking at page 1334 of the document here. The wavy signal lines look pretty much the same no matter which area the rats were in. But O'Keefe claims to have found differences. No one should be persuaded that papers using analysis so subjective show robust evidence for an important real effect. We should suspect that the analyst looked for stretches of wavy lines that looked different when the rat was in different areas, and chose the stretches of wavy lines that best supported his claim that some cells were more active when the rats were in different areas.

When I looked for later "place cell" papers by O'Keefe, I saw papers that seemed to just continue the same Questionable Research Practices. Specifically:

  • A 1993 paper co-authored by O'Keefe was entitled "Phase Relationship Between Hippocampal Place Units and the EEG Theta Rhythm." The paper used way-too-small study group sizes of only three rats and two rats.  No blinding protocol was used, and the paper was not a pre-registered study. We have some wavy-line analysis that seems extremely subjective and arbitrary.
  • A 2008 paper co-authored by O'Keefe was entitled "The boundary vector cell model of place cell firing and spatial memory." The paper used a way-too-small study group size of only two rats. For example, we read "Twenty five place cells were recorded from the two rats." No blinding protocol was used, and the paper was not a pre-registered study. We should chuckle when the paper says that "we followed 11 cells for time courses varying from a day to the duration of the experiment" and confesses ungrammatically that "it is difficult to draw firm conclusions from such as small data set." There are millions of cells in the brain of a rat. Paying attention to only a handful of such cells seems like ridiculous cherry picking.
  • A 2012 paper co-authored by O'Keefe was entitled "How vision and movement combine in the hippocampal place code." The paper used a way-too-small study group size of only six mice. No blinding protocol was used, and the paper was not a pre-registered study. We have some data analysis that seems extremely subjective and arbitrary.
  • A 2014 paper co-authored by O'Keefe was entitled "Long-term plasticity in hippocampal place-cell representation of environmental geometry." The paper used a way-too-small study group size of only three animals. No blinding protocol was used, and the paper was not a pre-registered study.

Studies like this are generally not good evidence unless a very stringent blinding protocol is used, and studies like this almost invariably fail to follow any kind of blinding protocol. It's easy to find the failure: just search for the word "blind" or "blinding" in the text of the paper, and note well when it fails to occur. 
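For anyone who wants to automate the check described above, here is a minimal sketch in Python. The sample strings are purely illustrative; in practice you would paste in text extracted from the paper itself.

```python
# Quick check: does a paper's extracted text mention any blinding protocol?
# The sample strings below are illustrative, not quotations from real papers.
import re

def mentions_blinding(text):
    """True if the text contains 'blind', 'blinded', or 'blinding'."""
    return re.search(r"\bblind(?:ed|ing)?\b", text, re.IGNORECASE) is not None

print(mentions_blinding("Scoring was performed by a blinded observer."))  # True
print(mentions_blinding("Twenty five place cells were recorded."))        # False
```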

In general, there is nothing scientific about using nicknames such as "place cells" to describe cells. The justification given for the use of such a term is based not on observations of permanent features of any cells, but on subjective judgments of how the cells behaved at particular moments. That's as unscientific and subjective as saying that certain people have "fear eyes" or "sorrow eyes," based on subjective judgments of how their eyes looked at particular moments.  

Although O'Keefe's "place cell" papers were not at all a robust demonstration of any important effect, the myth that "place cells" had been discovered started to spread around among neuroscience professors, aided by the use of a catchy memorable catchphrase: "place cells."  O'Keefe even got a Nobel Prize in 2014. The Nobel Prize committee is normally pretty good about awarding prizes only when an important discovery has been made for which there was very good evidence. Awarding O'Keefe a Nobel Prize for his unconvincing work on supposed "place cells" was a very bad flub of the normally trusty Nobel Prize committee. Even if certain cells are more active when rats are in certain positions (something we would always expect to observe from chance variations), that does nothing to show that there is anything like a map of spatial locations in the brains of rats or mice. 

This year something similar went on. A 2.5 million Euro prize was awarded to psychologist Christian Doeller for his work on what he calls "grid cells." Granting this award was a very bad error, because Doeller's work on so-called "grid cells" is just as weak and unconvincing as O'Keefe's work on so-called "place cells."

On the page announcing this award, we read these claims:

"How do human thought processes and the brain work? The psychologist Christian Doeller has been exploring this question for decades. He is a leading memory researcher and his work has led to ground-breaking findings in the field of neuronal spatial cognition, which is the ability of human beings to orient themselves in a physical space, and to apprehend and navigate it. Doeller demonstrated that spatial contexts can also be recoded into abstract categories and that they therefore form the neuronal basis of thinking and decision-making. Among other things, Doeller developed imaging analysis methods that allowed him to detect, for the first time ever, signals in the human brain that correspond to the grid cells. These are cells that were originally found in rats and that provide the animals with a system of coordinates that enables them to determine their own position."

The middle sentence here (the one claiming that spatial contexts "form the neuronal basis of thinking and decision-making") makes no sense. If "spatial contexts can also be recoded into abstract categories," that does nothing to show a "neuronal basis of thinking and decision-making." The claim about "a system of coordinates" that allows animals to "determine their own position" is spurious and groundless, and does not match any well-designed and robust neuroscience research. No one has ever discovered any coordinate or any number or any letter of the alphabet inside a brain by examining brain tissue or brain scans or EEG readings of brain waves. No one has any coherent and credible tale to tell of how any such thing as a "system of coordinates" could exist in any organism's brain.

On the page announcing this award, we have no direct links to particular papers written by Doeller, and no mention of the titles of such papers. There is an "information system GEPRIS" link that takes us to a page from which you can access two papers co-authored by Doeller, which are discussed below:

  • There is a link to a project called "At first glance: How saccades drive communication between the visual system and the hippocampus during memory formation." But we have no link to a paper, and Google Scholar makes no mention of such a paper. 
  • There is a link to a project called "Episodic integration under stress," and there is a link for a paper, one entitled "Stress disrupts insight-driven mnemonic reconfiguration in the medial temporal lobe." That paper has a good study size of about 60 humans. But it does not say anything about grid cells. Nor does it do anything to establish a "neuronal basis of thinking and decision-making." The paper does nothing to show a "mnemonic reconfiguration in the medial temporal lobe." We have some psychology test involving stress and memory. The paper draws no clear link between mind states and brain states. 
I looked on Google Scholar for other papers that Doeller might have authored on so-called "grid cells." I found these papers:
  • "Evidence for grid cells in a human memory network." This 2010 paper co-authored by Doeller includes a poorly-designed experiment with rats, one using a way-too-small study group size of only 8 rats. The term "grid cell" is used without any justification, and without any proper definition of what such a term means. The closest the paper comes to defining the term "grid cell" is when it says, "Grid cells recorded in medial entorhinal cortex of freely moving rodents fire whenever the animal traverses the vertices of an equilateral triangular grid covering the environment (see Fig. 1a), and may provide a neural substrate for path integration." But neurons in brains fire between 1 and 100 times per second. So it makes no sense to define a grid cell as a cell firing whenever some point or line is traversed. All neurons in the brains of rats and humans are continually firing. After some statistical gobbledygook that smells like "keep torturing the data" and "see whatever you are hoping to see" pareidolia, the paper claims, "Our results provide the first evidence that human entorhinal cortex encodes virtual movement direction with 6-fold symmetry, consistent with a coherently-oriented population of grid cells similar to those found in rat entorhinal cortex and pre- and parasubiculum." We should always be suspicious when neuroscientists claim to have found "the first evidence" for something. That amounts to a confession that at the time their paper was written, no other evidence for the claimed effect existed. Figure 3 gives an example of how unconvincing the paper's evidence is. We have some fMRI brain scan data purporting to show changes in brain activity. But if you take a close look at the scale, you will see that the differences are only small fractions of half of one percent. The differences being graphed are about 1 part in 400. These are negligible differences that are not convincing evidence for anything.
Anyone "noise mining" a brain scan and free to search any of 1000 little areas looking for such differences would be able to find such differences, purely because of random fluctuations.
  •  "Grid-cell representations in mental simulation." This 2016 paper makes this untrue claim: "Electrophysiological recordings in freely moving rodents have demonstrated that positional information during navigation is represented by place cells in the hippocampus (O'Keefe and Dostrovsky, 1971) and grid cells in entorhinal cortex (Hafting et al., 2005)." The claims are untrue; no such things were demonstrated. O'Keefe's work on so-called place cells is not convincing, because of the poor experimental practices I discuss above. The Hafting paper was guilty of Questionable Research Practices such as the use of fewer than 15 subjects per study group, the lack of a control group, and the lack of a blinding protocol. The authors of the paper "Grid-cell representations in mental simulation" then discuss an experiment of their own, in which a small number of people performed an imagination task while their brains were scanned. The results are not any convincing evidence either for so-called grid cells or for brains being involved in imagination. The study group size is not that bad (24 subjects). But nothing in the paper gives any convincing evidence for "grid-like representations" in the brain. We have no pre-registration of a hypothesis to be tested and an experimental protocol to be followed, and no blinding protocol.
  • "From Cells to Systems: Grids and Boundaries in Spatial Memory." This is a review article by Doeller, one containing many unfounded claims about research done by him and others. The paper claims, "The background firing rate of place cells is very low, effectively zero." But contrary to such a claim, very many sources tell us that all neurons in the brain continuously fire, at a rate between about 1 and 100 times per second. 
  • "Hexadirectional Modulation of High-Frequency Electrophysiological Activity in the Human Anterior Medial Temporal Lobe Maps Visual Space." This 2018 paper lacking any blinding protocol claims, "Our findings provide first evidence for a grid-like MEG signal, indicating that the human entorhinal cortex codes visual space in a grid-like manner." Again, I must emphasize that we should typically have little confidence in neuroscience researchers claiming to have provided the first evidence for some claim, as such a confession about "first evidence" is typically an admission that the claim is not well-replicated. The term "grid-like" is so flexible that almost anyone looking for something "grid-like" in a large body of data can find it somewhere. No convincing evidence is presented here that brains are representing visual space. It's just more pareidolia in which neuroscientists eagerly seeking grids claimed to have found something "grid-like." Similarly, give eager cloud analysts thousands of photos of clouds, and if the analysts are eager to find "grid-like" patterns, they will surely be able to find them somewhere. 
Reviewing Doeller's work, I fail to find anywhere any convincing evidence for anything like representations in the brain of visual space or anything non-genetic, or anything like convincing evidence that justifies the use of the terms "place cells" or "grid cells." Failing to use pre-registration, his papers typically seem to involve someone being free to analyze brain data in an endless number of ways, with the analyst then announcing that after using such-and-such an elaborate gobbledygook rigmarole scheme of convoluted analysis, something "grid-like" was supposedly found. Typically there is a stink suggesting noise-mining, pareidolia, and "keep torturing the data until it confesses." None of the gathered brain data does anything to naturally suggest any such thing as neural representations of the subject's position or orientation or what the subject is seeing. But when neuroscientists are free to slice, dice and massage data in endless possible ways, they may get the data to produce the faintest whiff of a suggestion of something, whenever neuroscientists are eager to conjure up such a suggestion.

keep torturing the data until it confesses

Things such as neuron firing rates (picked up by EEG devices) or tiny variations in blood flow rates in brains (picked up by fMRI scanners) are randomly fluctuating data. Anyone eagerly analyzing some large body of randomly fluctuating data hoping to find some desired correlation or pattern will always be able to find little bits of "superior activity" or "increased activation" here or there, about as good as the weak evidence Doeller gives for his so-called "grid cells." But that's not decent evidence of something being represented in the brain. Similarly, someone eagerly analyzing thousands of pictures of clouds in the sky and hoping to find something that looks like the ghost of an animal will be able to find now and then some shape that looks a little like an animal. But that's no evidence for any real representation of animal shapes in the sky.
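This point is easy to demonstrate with a simulation. The sketch below is a toy model, not real brain data: it compares two groups of pure random noise in each of 1000 hypothetical brain regions, and roughly 5% of the regions show a "statistically significant" difference purely by chance.

```python
# A sketch of "noise mining": compare two groups of pure random noise in
# each of 1000 hypothetical brain regions; dozens of regions come out
# "significant" at p < 0.05 from chance alone.
import random
import statistics

random.seed(0)

def noise_region_is_significant(n=20):
    """Welch-style t comparison of two pure-noise groups of n observations."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.02  # roughly the p < 0.05 cutoff for ~38 degrees of freedom

false_positives = sum(noise_region_is_significant() for _ in range(1000))
print(false_positives)  # on the order of 50 spurious "findings"
```

An analyst free to report whichever of those regions looks interesting will never come home empty-handed, which is exactly the problem with unregistered, unblinded scan-and-search studies.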

The term "representation" is enormously misused and abused by neuroscientists, who abundantly use the term in an imprecise way, without any adequate warrant. Assumptions underlying Doeller's papers are implausible. If there were to be increased firing in some cells when some subject was in some area of a grid (as picked up by an EEG), that would not actually be a representation of the subject's surroundings. And if there were to be a tiny bit more brain activity in some tiny fraction of the brain when some subject was in some area of a grid (as picked up by an fMRI machine), that would not actually be a representation of the subject's surroundings.

The fact that Doeller has received a 2.5 million Euro prize for his weak research is a commentary on how little progress is being made in trying to substantiate "brains make minds" claims and "brains store memories" claims. When a top prize goes to research this weak, it shows you how badly neuroscientists are failing in their attempts to substantiate their untenable dogmas about brains, already discredited by so many facts discussed in the posts of this blog.

To get an example of some robust grid-related science, you can look at the periodic table shown below:

Credit: National Institute of Standards and Technology (link)

There's no pareidolia going on in the periodic table shown above, no "see what you were hoping to see" noise mining like in papers about so-called grid cells. Nature really does have the number of elements listed in this table. Each of the listed elements really does have a number of protons exactly equal to the number shown in the top left corner of the square representing the element, and each really does have an average atomic weight equal to the one shown under the element's name in that square.

We may contrast this rock-solid, good-as-gold example of robust science with the socially constructed will-o'-the-wisp legend-mongering pareidolia dross of "place cell" papers and "grid cell" papers producing no robust evidence of representations in any brain cells, papers that engage in unjustified cell-nicknaming that is not at all robust observational science.

Below is a relevant quote from a scientist:
  • "Neuroscience, as it is practiced today, is a pseudoscience, largely because it relies on post hoc correlation-fishing....As previously detailed, practitioners simply record some neural activity within a particular time frame; describe some events going on in the lab during the same time frame; then fish around for correlations between the events and the 'data' collected. Correlations, of course, will always be found. Even if, instead of neural recordings and 'stimuli' or 'tasks' we simply used two sets of random numbers, we would find correlations, simply due to chance. What’s more, the bigger the dataset, the more chance correlations we’ll turn out (Calude & Longo (2016)). So this type of exercise will always yield 'results;' and since all we’re called on to do is count and correlate, there’s no way we can fail. Maybe some of our correlations are 'true,' i.e. represent reliable associations; but we have no way of knowing; and in the case of complex systems, it’s extremely unlikely. It’s akin to flipping a coin a number of times, recording the results, and making fancy algorithms linking e.g. the third throw with the sixth, and hundredth, or describing some involved pattern between odd and even throws, etc. The possible constructs, or 'models' we could concoct are endless. But if you repeat the flips, your results will certainly be different, and your algorithms invalid...As Konrad Kording has admitted, practitioners get around the non-replication problem simply by avoiding doing replications.” -- A vision scientist (link). 
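The vision scientist's point about two sets of random numbers can be demonstrated in a few lines of code. The sketch below generates one random "recording" and 500 random candidate "stimulus variables" (all pure noise), and then reports the strongest correlation found; a strong-looking correlation always turns up.

```python
# Demonstration of chance correlations: search enough pairs of random
# sequences and a strong-looking "correlation" always appears.
import random

random.seed(1)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

recording = [random.random() for _ in range(20)]  # stand-in "neural data"
best = max(abs(pearson(recording, [random.random() for _ in range(20)]))
           for _ in range(500))  # 500 random candidate "stimulus variables"
print(round(best, 2))  # a strong-looking correlation from pure randomness
```

And as the quote notes, rerunning the whole thing with a different seed produces a different "best" pairing, which is exactly why such correlation-fishing does not replicate.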
The paper here ("Investigating the concept of representation in the neural and psychological sciences") says this: "Neuroscientists and psychologists do not appear to have a precise idea about what kind of brain structure or pattern counts as representation." 

There are no components in the brain that have a physical structure resembling a grid. No electron microscope photos of anything in a brain show anything looking like a grid. This is one of the reasons why it is misleading to be using the term "grid cells" to refer to claimed grid resemblances that only arise after convoluted dubious data analysis has been done by scientists. 

Saturday, March 28, 2026

2 Out of 3 Methods Find No Correlation Between Alzheimer's and Hippocampus Volume

Scientists have no coherent story to tell of how a brain could store a memory or instantly retrieve a memory or maintain a memory in a brain that replaces its proteins at a rate of about 3% per day (an effect that should prevent you from remembering anything for more than two months, if your brain stored your memories). The phrases they mutter about such things make no sense. Asked to explain a mechanism of neural memory storage, a neuroscientist may mutter some phrase such as "synapse strengthening," which makes no sense as an explanation of memory formation. Strengthening is not information storage. And the idea that you store memories in synapses when they strengthen makes no more sense than the idea that you store memories in your arm biceps when they strengthen. There is no robust evidence that synapses strengthen more when you learn something.
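The arithmetic behind that two-month figure is simple. Treating the 3% figure as a uniform daily replacement rate (a back-of-envelope assumption), about 97% of proteins are retained each day, so after 60 days only a small fraction of the original proteins remain:

```python
# Back-of-envelope check: if ~3% of brain proteins are replaced each day,
# what fraction of the original proteins remains after two months?
daily_retention = 1 - 0.03
days = 60
remaining = daily_retention ** days
print(round(remaining, 2))  # ≈ 0.16, i.e. roughly 16% of the original proteins
```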

Having no real evidence on a cellular level to support their claims of neural memory storage, neuroscientists sometimes resort to claims about particular parts of the brain, claiming that some such part of the brain is needed for memory. Such claims usually involve the hippocampus, said to be crucial for memory. The experimental evidence has never supported such claims. People with no hippocampus or a damaged hippocampus usually perform fairly well on tests of memory.

In my widely-read long post here (entitled "Studies Debunk Hippocampus Memory Myths") I discuss very many scientific papers that discredit the claim that the hippocampus has some big relation to memory. That post is very thorough in reviewing the relevant literature, but I recently noticed that it has one omission: it fails to discuss what relation (if any) there is between the volume of the hippocampus and a person's tendency to have the severe cognitive disorder known as Alzheimer's disease. Let us look at that very issue, using a 2023 paper entitled "MRI measurements of brain hippocampus volume in relation to mild cognitive impairment and Alzheimer disease: A systematic review and meta-analysis," which you can read here.

The paper is a meta-analysis of different studies relating hippocampus volume to Alzheimer's disease (AD) and mild cognitive impairment (MCI). The paper comes to different conclusions, depending on the measurement method used.

I can explain the different methods:

(1) The first method simply checks the "raw volume" of the hippocampus. This is the easiest and simplest approach.

(2) Another more sophisticated method factors in the "total intracranial volume" of subjects in addition to the hippocampus volume. This is called the "hippocampus volume measured by MRI TIV Correction" method. An AI Overview states, "Total Intracranial Volume (TIV) correction in MRI is a critical preprocessing step in neuroimaging that normalizes brain structural volumes (such as gray matter, white matter, or hippocampus) to account for variations in individual head size."

(3) Another method is called the "hippocampus measured by MRI ICA Correction" method. An AI Overview states, "Independent Component Analysis (ICA) in MRI is a data-driven, blind source separation technique used to isolate and remove noise sources (artifacts) from imaging data to enhance signal quality. It is highly effective at separating artifacts like motion, respiration, and large vessel signals from functional (fMRI) or dynamic susceptibility contrast (DSC-MRI) data, improving diagnostic accuracy." People are supposed to remain totally motionless when they are undergoing an MRI scan, but they often fail to be totally motionless, and this can affect the quality of the MRI. This "MRI ICA Correction" helps fix that data quality problem.

Here are the results for the left hippocampus when these three methods were used. 

Method 1: "The left hippocampus volume measured by MRI Raw volume was negatively correlated with MCI and AD (OR = 0.58, 95%CI: 0.42, 0.75)." So when the simplest and least sophisticated method was used, a correlation was reported between hippocampus volume and mild cognitive impairment (MCI) and Alzheimer's Disease (AD). 

Method 2: "Results of meta-analysis showed no correlation between left hippocampus volume measured by MRI TIV Correction and MCI and AD (OR = 0.90, 95%CI: 0.62, 1.19), as shown in Figure 4." So when the first of the two more sophisticated methods was used, there was  no correlation found between left hippocampus volume and mild cognitive impairment (MCI);  and there was no correlation found between left hippocampus volume and Alzheimer's Disease (AD). 

Method 3:  "Results of meta-analysis showed that the volume of the left hippocampus measured by MRI ICA Correction was not correlated with MCI and AD (OR = 0.92, 95th CI: 0.75, 1.09), as shown in Figure 5." So when the second of the two more sophisticated methods was used, there was  no correlation found between left hippocampus volume and mild cognitive impairment (MCI); and there was no correlation found between left hippocampus volume and Alzheimer's Disease (AD). 

The bottom line here is that two out of the three methods did not find any correlation between left hippocampus volume and mild cognitive impairment (MCI), and did not find any correlation between left hippocampus volume and Alzheimer's Disease.
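The convention being applied in these quotations is worth spelling out: a reported odds ratio counts as "no correlation found" when its 95% confidence interval includes 1.0. The sketch below applies that rule to the three left-hippocampus intervals quoted above.

```python
# Reading the meta-analysis results: an odds ratio whose 95% confidence
# interval includes 1.0 is conventionally read as "no correlation found."
# The intervals are the ones quoted from the paper for the left hippocampus.
def ci_shows_correlation(low, high):
    """True if the 95% CI for the odds ratio excludes 1.0."""
    return not (low <= 1.0 <= high)

left_results = {
    "raw volume":     (0.42, 0.75),  # Method 1
    "TIV correction": (0.62, 1.19),  # Method 2
    "ICA correction": (0.75, 1.09),  # Method 3
}
for method, (low, high) in left_results.items():
    verdict = "correlation" if ci_shows_correlation(low, high) else "no correlation"
    print(method, "->", verdict)
```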

Here are the results for the right hippocampus when these three methods were used. 

Method 1: "Results of meta-analysis showed that the right hippocampus volume measured by MRI Raw volume method was not correlated with MCI and AD (OR = 0.87, 95%CI: 0.56, 1.18), as shown in Figure 7." So when the simplest and least sophisticated method was used, no correlation was reported between right hippocampus volume and mild cognitive impairment (MCI); and there was no correlation found between right hippocampus volume and Alzheimer's Disease (AD). 

Method 2: "Results of meta-analysis showed no correlation between the right  hippocampus volume measured by MRI TIV Correction and  MCI and AD (OR = 0.81, 95%CI: 0.49, 1.12), as shown in Figure 8." So when the first of the two more sophisticated methods was used, there was  no correlation found between right  hippocampus volume and mild cognitive impairment (MCI);  and there was no correlation found between right hippocampus volume and Alzheimer's Disease (AD). 

Method 3:  "Results of meta-analysis showed that the volume of the right hippocampus measured by MRI ICA Correction was negatively correlated with MCI and AD (OR = 0.49, 95%CI: 0.35, 0.62), as shown in Figure 9." So when the second of the two more sophisticated methods was used, there was  a correlation found between right hippocampus volume and mild cognitive impairment (MCI) and Alzheimer's Disease (AD). 

The bottom line here is that two out of the three methods did not find any correlation between right hippocampus volume and mild cognitive impairment (MCI), and did not find any correlation between right hippocampus volume and Alzheimer's Disease.

Overall, the results here are quite consistent with my claims that memory is not a brain function, and that memories are not stored in brains. 

Tuesday, March 24, 2026

Imaginatively Misidentifying Scientists Are Like Someone Seeing a Cloud and Calling It a Spaceship

 Scientists may make the goofiest misidentifications, and whenever it sounds like some grand achievement, gullible science journalists fall for it "hook, line and sinker," spreading the tall tale far and wide. 

We had an example of this recently in the science news. Here is how a paper was handled on the Science News page of Google News. 


The headlines were all false. Nothing at all like a "nearly invisible dark galaxy" had been found, and no evidence of any dark matter had been produced. I may note that the Science News page of Google News now has a "group headline" feature that cannot be trusted, and which tends to parrot false clickbait claims made in the group of "science news" stories it is presenting in one spot of its page. 

A look at the scientific paper shows what is going on. Some scientists saw some globular clusters about 300 million light-years away. Globular clusters are dense concentrations of stars, and they make some of the most beautiful sights in deep space. Below we see an example of a globular cluster. 

Credit: ESA/Hubble& NASA

Now, normally globular clusters are seen kind of at the edges of galaxies, much larger groups of stars, rather than existing far away from any galaxy. The diagram below shows the typical distribution of globular clusters, shown as circles in the diagram. 


 The astronomers report seeing four globular clusters existing without much of any nearby regular galaxy consisting mainly of evenly distributed stars. The astronomers have wrongly claimed that this is evidence that these globular clusters are embedded within a galaxy that is almost entirely invisible dark matter. But these globular clusters do not provide evidence of any such thing. At best, they  merely give evidence suggesting globular clusters can form outside of a galaxy, or within a galaxy that is mostly faint visible gas, looking different from an ordinary galaxy. 

What is going on is imaginative misidentification. Seeing merely four rather funny-looking visible globular clusters (things vastly smaller than galaxies), the scientists have claimed that what they see is a surrounding dark matter galaxy. Their observations provide no warrant for such a claim. They have not actually seen any dark matter at all. All they saw was some visible globular clusters and some gas. 

The world of dark matter cosmology is a world of imaginative misidentification. Cosmologists are often claiming that they saw invisible dark matter, when all they saw was visible matter. In their papers or press accounts of their papers, we frequently get misleading language such as "scientists saw dark matter" or "we saw dark matter." What we should instead be reading is language such as "scientists inferred dark matter" or "we inferred dark matter."

Dark matter has no place in the Standard Model of Physics. No one has ever seen dark matter, which cosmologists claim is invisible. But the theory of dark matter is a cherished tenet of the small group of scientists called cosmologists, consisting of no more than a few thousand scientists around the world. In such a small group of scientists, belief traditions can arise, and stubbornly persist over many decades or centuries, despite a lack of observational warrant for such beliefs. 

And so it is in the world of cognitive neuroscientists. The largest society of cognitive neuroscientists is the Cognitive Neuroscience Society, which has only about 2000 members worldwide. The members of such a society form a cloistered little clique in which groupthink predominates. 

Experts tend to exist in "echo chambers" where groupthink and herd effects may predominate. Often arising in way-too-narrow and way-too-specialized fields of study (sometimes called silos), such echo chambers can be found in the ivory towers of academia or in ideological enclaves such as monasteries or seminaries (schools that train people to be clergy). Within such an echo chamber, people tend to hear only people who belong to the same belief community, people who share the same ideology. In such an ideological enclave, absurd or immoral or unwarranted opinions may be voiced, and may be regarded as great wisdom by anyone who looks around and sees other members of the belief community nodding in agreement. 

groupthink in expert communities

In the world of cognitive neuroscientists, we see abundant cases of imaginative misidentification similar to those occurring in cosmology. Some examples are:
  • Neuroscientists who look at ordinary brain cells having no special characteristics, and claim without warrant that they are "engram cells" storing a memory. 
  • Neuroscientists who claim they see "representations" or "encoding" when they look at something in the brain showing no actual sign of any representation of learned knowledge or learned experience, and no sign at all of any real encoding going on. 
  • Neuroscientists who look at ordinary cells having no special characteristics, and who announce that they are seeing "place cells," based on some claim of "superior activation" that is typically without good warrant, coming from low-quality studies using way-too-few subjects for any reliable result to be claimed.  
  • Neuroscientists who look at ordinary connections in the brain having no special characteristics (ordinary axons and synapses), and then announce without warrant that they are seeing "circuits for..." this or that cognitive experience or capability. 
  • Neuroscientists who look at some chemistry readings that fail to provide any warrant for claims relating to cognition, and who announce that they are seeing "the chemistry of" some grand thing such as romantic love or spirituality or ambition. 
The MIT Press Reader has a very good recent article on poor practice among research scientists, one entitled "How 'Tiny Shortcuts' Are Poisoning Science." You can read it here.

Friday, March 20, 2026

Study Fails to Show a Brain Basis for Attention Deficit Disorder

A paper lamenting the lack of neuroscience progress is the paper "Why hasn't neuroscience delivered for psychiatry?" by David Kingdon, a professor of psychiatry. After noting some progress in medicine, Kingdon states the following:

"The major mental illnesses psychosis, bipolar disorder, anxiety disorders, anorexia nervosa and depression have proved remarkably resistant to similar developments. Unfortunately, it is still not possible to cite a single neuroscience or genetic finding that has been of use to the practicing psychiatrist in managing these illnesses despite attempts to suggest the contrary."

After noting the lavish funding that neuroscientists have long received in attempts to find a brain cause for mental illnesses, Kingdon states this: 

"Why do we not have evidence of biological malfunctioning for severe mental disorders? Mental disorder can be caused by biological insults such as frontal lobe damage, dementia and delirium, but biological changes have yet to be shown to be relevant to the major mental disorders." 

Talking about changes in the brain, Kingdon states this: "No such clear causative changes exist in severe mental illnesses such as depression, anxiety, bipolar disorder and schizophrenia." After noting "25 years of research frustration," Kingdon quotes a neuroscientist who advocates that we keep at this not-getting-much-of-anywhere research approach. Kingdon then states this:

"But does this not seem, after more than 30 years of failure, more akin to a religious or, albeit culturally influenced, persistent strong belief than one based on scientific grounds? Just where is the rational justification for ploughing the same furrow again and again?"

Kingdon then ends by stating this: "The time has come to challenge the justification for such relatively high levels of investment of time, expertise and resource in neuroscience for mental disorders."

I can give an answer to the question posed by Kingdon's paper, the question of, "Why hasn't neuroscience delivered for psychiatry?" The answer is that the main claims of neuroscientists about brains and minds are incorrect. Our minds are not produced by our brains as neuroscientists claim. So looking for neural causes of the main mental illnesses is an approach likely to fail.  Once experts realize that mind is a fundamentally spiritual, psychic or metaphysical reality, they may start pursuing spiritual, social, psychological and psychic approaches to mental health treatment, approaches that may do far more for helping mental illness than neuroscientists have ever done. 

Recently we had a paper showing the latest failure of scientists to show a neural basis for a commonly diagnosed mental condition. The condition studied was ADHD, which stands for Attention Deficit Hyperactivity Disorder. Children diagnosed with this condition may be observed as paying less attention in school, and may be so active and full of energy that they find it difficult to stay sitting at a desk for school lessons. 

The paper (which you can read here) is entitled "Brain morphological changes across behaviour spectrums in attention deficit/hyperactivity disorder." The paper analyzed brain scans of 135 children and adolescents diagnosed with ADHD (Attention Deficit Hyperactivity Disorder) and also 182 "neurotypical controls." 

Looking for differences in gray matter volume (GMV) between the subjects with ADHD and the normal subjects, the study failed to find any difference. We read this: "Voxel-wise comparisons of GMV [gray matter volume] between participants with ADHD and NCs [normal controls] revealed no significant differences, which contrasts with current understanding of the pathophysiological mechanisms underlying ADHD." In other words, the subjects diagnosed with attention deficit disorder did not have smaller amounts of gray matter in their brains. And their brains were not smaller. 

The authors then resorted to the old "keep torturing the data until it confesses" trick that can be called "subgroup mining." This technique works like this: when a scientist fails to find a difference in the overall group of subjects, the scientist may look for some fraction of the group in which there is a difference. So, for example, if you are analyzing 100 American subjects and 100 Mexican subjects looking for a difference in intelligence, and you find no difference between the groups as a whole, you might then try to create a kind of "aroma of a difference" by reporting a difference between one subgroup of American subjects that were smarter and one subgroup of Mexican subjects that were less intelligent. This tactic of "subgroup mining" is in general misleading. It creates unwarranted impressions of differences, impressions that do not correspond to the data found in the entire pool of subjects. 
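To see concretely why subgroup mining is misleading, here is a small simulation sketch (the data, test, and thresholds below are my own illustration, not anything from the paper). Two groups are drawn from the very same distribution, so no real difference exists; yet fishing through enough arbitrary post-hoc subgroups reliably turns up "significant" contrasts:

```python
import math
import random

random.seed(0)

def two_sample_p(a, b):
    # Two-sided p-value for a two-sample z-test, treating the population
    # standard deviation as known and equal to 1 (a simulation convenience).
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    z = (mean_a - mean_b) / math.sqrt(1 / len(a) + 1 / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

# Two groups drawn from the SAME distribution: no real difference exists.
group_a = [random.gauss(0, 1) for _ in range(100)]
group_b = [random.gauss(0, 1) for _ in range(100)]
print("whole-group p-value:", round(two_sample_p(group_a, group_b), 3))

# "Subgroup mining": keep trying arbitrary post-hoc subgroups until some
# pairing happens to cross the p < 0.05 line by chance alone.
trials = 200
hits = sum(
    1 for _ in range(trials)
    if two_sample_p(random.sample(group_a, 25), random.sample(group_b, 25)) < 0.05
)
print(f"{hits} of {trials} arbitrary subgroup comparisons reached p < 0.05")
```

Since roughly 1 in 20 null comparisons crosses the p < 0.05 line by chance, an analyst who quietly tries dozens of subgroup splits is all but guaranteed to find a few "significant" ones.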

We read of the most convoluted statistical shenanigans going on to try to create subgroups by some insanely byzantine mathematical contortions. The excerpt below gives only a small fraction of the gobbledygook describing the "keep torturing the data until it confesses" nonsense that was going on:

"HYDRA, an advanced semisupervised learning algorithm, was applied to identify the ADHD neuroanatomical subtypes with brain regional volumes as features (online supplemental figure S1a).24 HYDRA involved the following steps: first, participants with ADHD were labelled as positive and NCs as negative. Second, a convex polyhedron with K planes (equal to the number of clusters) was constructed to separate ADHD from NCs. Then, an extended standard linear maximum-edge classifier was used to calculate the distance between each participant with ADHD and each hyperplane. Finally, each participant with ADHD was assigned to the nearest hyperplane, resulting in K clusters (ADHD subtypes). In the clustering process, we used a 10-fold crossvalidation strategy. In each cross-validation step, ninefold data were used for clustering. After 10-fold crossvalidation, each participant with ADHD had nine clustering labels. A total of 20 clustering consensus steps were then performed to determine the final clustering label for each participant with ADHD using a co-occurrence matrix generated from the nine labels. The number of clusters (K) was set from 2 to 10, and the optimal cluster number was determined using the Adjusted Rand Index (ARI).25 To test the effect of feature selection on subtyping, the following steps were performed for the 116 features: first, we used Levene’s test to examine the homogeneity of variance for each feature between the NC and ADHD groups, resulting in the exclusion of 20 features. Next, we employed the intraclass correlation coefficient (ICC) to assess the consistency of the remaining features across sites with an ICC threshold of 0.15, excluding 68 features. Finally, we removed features with an absolute correlation coefficient with the ADHD index <0.15, leaving 11 features." 

The statistical funny business going on was actually vastly more complicated than what is mentioned above. After this methodological madness that cannot be justified by any straightforward and reasonable explanation, the authors ended up with a "subtype 1" which "showed increased GMV [gray matter volume] mainly in the frontal, parietal and temporal regions." There was also a "subtype 2" which either "showed no significant GMV [gray matter volume] differences compared with NC [normal controls]" or "exhibited GM [gray matter] reductions mainly in the bilateral cerebellum, insula, limbic systems, frontal, temporal and occipital regions." 

So "it's a wash." One little fraction of the subjects with ADHD had more gray matter in their brains, and some other little fraction of the subjects with ADHD had less gray matter in their brains. It's what we might expect if gray matter differences are not the cause of ADHD. 

It should be noted that the difference supposedly existing in this subgroup with a smaller gray matter volume was minimal, because we read of a statistical significance that is merely "p<0.05," which is the weakest level that qualifies as "statistical significance" under current publication traditions. Whenever so weak a level of statistical significance is the best reported, there is reason to suspect "p-hacking" (an analyst trying many different analyses until one of them crosses the threshold), and every reason to doubt that any substantial difference exists. Many scientists say that the tradition of reporting these "p<0.05" results as being "statistically significant" is a bad tradition of modern science, and that a more stringent criterion should exist, in which only results as good as "p<0.01" or "p<.001" are reported as being "statistically significant."
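A quick simulation illustrates why a bare "p<0.05" result warrants so little confidence (this sketch assumes a simple two-sided z-test; the numbers are illustrative, not drawn from the ADHD paper). Even when no real effect exists at all, about 1 in 20 tests still crosses the p < 0.05 line:

```python
import math
import random

random.seed(1)

def two_sided_p(z):
    # Two-sided p-value for a standard-normal test statistic.
    return math.erfc(abs(z) / math.sqrt(2))

# Simulate 10,000 experiments in which the null hypothesis is TRUE
# (no real effect exists): the test statistic is pure standard-normal noise.
n_experiments = 10_000
false_positives = sum(
    1 for _ in range(n_experiments) if two_sided_p(random.gauss(0, 1)) < 0.05
)
rate = false_positives / n_experiments
print(f"false-positive rate at p < 0.05: {rate:.3f}")  # roughly 0.05

# The chance that at least one of 20 such null tests reaches p < 0.05:
print(f"chance of at least one 'hit' in 20 tries: {1 - 0.95 ** 20:.2f}")
```

So an analyst free to run twenty analyses on null data has better-than-even odds of producing a "statistically significant" finding, which is why a lone p<0.05 result from a flexible analysis pipeline proves very little.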

The Google Gemini infographic visual below explains the concept of p-hacking in scientific experiments. 

p-hacking


Do not be fooled by the statistical "funny business" p-hacking shenanigans that the authors of this paper engaged in. Their data is entirely consistent with the conclusion that there is no difference between the brains of those with ADHD (Attention Deficit Hyperactivity Disorder) and the brains of normal subjects. With this paper we should ignore all of the "keep torturing the data until it confesses" p-hacking nonsense that was going on, and focus instead on a single sentence of the paper, the sentence declaring, "Voxel-wise comparisons of GMV [gray matter volume] between participants with ADHD and NCs [normal controls] revealed no significant differences, which contrasts with current understanding of the pathophysiological mechanisms underlying ADHD."  In other words, there is no evidence that people with ADHD (Attention Deficit Hyperactivity Disorder) have different brains or fewer neurons than those without such a disorder.  This result is consistent with my claim that the brain is not the source of the human mind. 

Monday, March 16, 2026

Scientists on Book Tours May Mislead Us

 When writing up some scientific paper that may involve either experimental research or theorizing, a scientist often may think to himself questions such as "Is there some way to frame this work so that it sounds like important work worthy of publication?" or "Is there some way to spin this work so that it sounds like important work worthy of being cited, so that my citation count will increase?" But a scientist willing to "think big" may be more ambitious, and ask himself: "Is there some way to spin this work so that it sounds like something I could parlay into a book deal?" 

There are two main types of scientists who try to parlay their work into book deals: the "grand theory" scientist and the "lab legend" experimental research scientist.  The "grand theory" scientist is often someone thinking: how can I make my theory into a best-selling book? 

scientist interested in book deal

The "grand theory" scientist trying to become a best-selling author is someone we cannot trust to accurately summarize facts and accurately report on reality, because he is so very badly biased. Such a scientist tends to report everything in a way that tries to make his grand theory seem credible. 

An example of such a scientist was Charles Darwin, who did a bad job of reporting on biological reality in his books, because he was trying to make his theory of accidental biological origins seem more credible. In the title of his main work, Darwin used the misleading phrase "natural selection," which does not correspond to anything Darwin described to try to explain the origin of species. Selection means an act of choice, and blind unconscious nature does not select things. Describing a biological world in which we never observe random variations in an organism that are even a tenth of a new complex biological innovation, Darwin tried to make it sound like such events were happening all the time. When it came to describing the difference between humans and apes, Darwin resorted to the most appalling deceit, making on page 99 of The Descent of Man the obviously untrue claim that "there is no fundamental difference between man and the higher mammals in their mental faculties." 

Another example of a "grand theory" scientist making very misleading statements was that of a certain professor trying to drum up sales of his book claiming that our solar system has been visited by an extraterrestrial spaceship. The professor did research trying to dredge up evidence for his claim that a spaceship from another solar system had burnt up in the atmosphere.  The research found nothing but the most ordinary specks of metal, but the professor tried to portray these as evidence of something of epic significance: remnants of a spaceship from another solar system. 

Another type of scientist trying to sell a book of his is what we may call the "lab legend" scientist. This type of scientist tries to capitalize on lab research he has done, by playing up some legend that his lab work was some epic breakthrough very worthy of attention. Typically the research very much fails to live up to the legend. 

scientist book tour

To promote books they write, scientists go on book tours, in which they engage in a series of interviews designed to promote the book. The statements scientists make during such book tours are some of the most unreliable utterances of scientists. Lining up the series of interviews for a book tour is typically the job of the publicity department of a publisher. Typically a scientist going on a book tour will be prepped by someone working for the publisher, who will emphasize the importance of the scientist producing exciting-sounding quotes. So if the scientist has produced some boring work that does not prove anything interesting, he may be told that he should "spice things up" by making it sound like his research is some "epic breakthrough." 

When scientists write books promoting self-glorifying "lab legends," key factors are the dust jacket and the back cover. The front flap of the paper dust jacket customarily has text written not by the author himself, but by some employee of the publishing company. Here the PR hacks of a publishing company often engage in very bad deceit, making all kinds of claims about authors not warranted by any facts. In either paperback or hardcover, the back cover of a book by a scientist may also have boastful claims written by the publicity department of a publisher; and such claims are often "try to make a mountain out of a molehill" types of claims. Favorable quotations by early readers of the book (called "blurbs" in the industry) are often written by associates or friends of a scientist, with lots of "quid pro quo" expectations, along the lines of "you rub my back and I'll rub yours." So if Professor X gives a favorable quotation about a book by Professor Y, it may be expected that Professor Y will give an equally favorable "blurb" when Professor X sends Professor Y an early copy of his book. 

science book back cover


Not very long ago we had an example of one of these "lab legend" scientists making very misleading statements while on a book tour. In interviews on his book tour, we heard the unfounded triumphal legend that the scientist did something to physically manipulate memories. In one of the interview articles making such a claim, we have a link back to one of the papers of the scientist. It is a very bad piece of junk science guilty of very bad examples of Questionable Research Practices, such as the use of way-too-small study group sizes and the use of an utterly unreliable method of trying to judge fear or recall in rodents (the "freezing behavior" method discussed here). The scientist hawking his book never did anything to show that memories are stored in the brains of rodents, and never did anything to mechanistically modify a memory, contrary to his groundless boasts; and no scientist has ever done any such thing. We have the peddling of explanatory snake oil to gin up book sales. 

We should shake our heads in dismay while pondering the willingness of mainstream sites to fall "hook, line and sinker" when such lab legends are pushed by trying-to-glorify-themselves scientists on book tours.

Thursday, March 12, 2026

The Groundless Myth of Superagers Growing More Brain Cells

 Never forget there's a "profess" in the word "professor."  And "profess" is defined as "to affirm one's faith in or allegiance to (a religion or set of beliefs)."  A recent press release from the rather strangely named University of Illinois Chicago gives us an example of a neuroscientist professor professing, and incorrectly boasting about grand things that were not at all done. 

We have a headline of "What makes superagers’ brains special?"  So-called superagers are very old people with very good memories. The press release attempts to suggest the story line that these superagers can remember better because they had higher rates of neuron creation (neurogenesis).  It's a story line that fits in with the idea that brains store memories. But it's not a story line backed up by any good evidence. 

Neurons are created before humans reach adulthood. But there is no robust evidence that significant numbers of neurons are created in adulthood. 

In the UIC press release we have this extremely glaring example of a scientist crowing about some grand and glorious result, when the research is actually very low-quality research because of its use of way-too-small study group sizes.  The press release says this:

" 'This is a big step forward in understanding how the human brain processes cognition, forms memories and ages. Determining why some brains age more healthily than others can help researchers make therapeutics for healthy aging, cognitive resilience and the prevention of Alzheimer’s disease and related dementia,' said Orly Lazarov, a professor in UIC’s College of Medicine and director of the Alzheimer’s Disease and Related Dementia Training Program."

But the truth is that we have here neither a "big step forward" nor an example of a decently done scientific study. The reason is the tiny study group sizes, which were only 8 subjects per study group. 

When we look at the scientific paper "Human hippocampal neurogenesis in adulthood, ageing and Alzheimer’s disease," and search for the study group size (using the search phrase "n=") we find that the study group sizes were only 8 subjects per study group. No study like this should be taken seriously unless it used a study group size of at least 15 or 20 subjects per study group. 
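As a rough illustration of why 8 subjects per study group is too few, here is a simulation sketch of statistical power (assuming a two-sample z-test with known variance and a medium-sized true effect of Cohen's d = 0.5; these modeling choices are mine, not the paper's):

```python
import math
import random

random.seed(3)

def simulated_power(n_per_group, effect_size, alpha_crit=1.96, n_sims=4000):
    """Estimate the power of a two-sample z-test (sigma assumed known = 1)
    by simulating many experiments with a genuine effect of the given size."""
    hits = 0
    for _ in range(n_sims):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(effect_size, 1) for _ in range(n_per_group)]
        mean_diff = (sum(b) - sum(a)) / n_per_group
        z = mean_diff / math.sqrt(2 / n_per_group)
        if abs(z) > alpha_crit:  # two-sided test at alpha = 0.05
            hits += 1
    return hits / n_sims

# The paper's 8 subjects per group, versus 20 per group:
for n in (8, 20):
    print(f"n = {n:2d} per group -> power = {simulated_power(n, 0.5):.2f}")
```

Under these generous assumptions, a true medium-sized effect would be detected only a small minority of the time with 8 subjects per group, meaning both missed effects and fluky exaggerated "findings" are to be expected.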

neuroscience sample sizes too small

There is simply no basis for concluding that superagers have neuron creation rates any different from old people with poor memories, nor is there any good basis for thinking that adults create new neurons in significant numbers. No such generalization can be made with any reliability when you use a way-too-small study group of only 8 people. 

Did the UIC press release tell us that only 8 subjects per study group were used? No, it conveniently forgot to mention how many subjects were used. 

We should note that in the text of the paper Lazarov sings a tune very different from her boasts in the press release. In the paper we read this (with "power" referring to statistical power):

"Notably, we observed a general increase in the number of immature neurons in SA; however, inter-sample variability and low sample number compromised the power of our analysis. It should be noted that the high level of variability from sample-to-sample in cell-type abundance limited the quantitative power of our study. Future experiments with a greater number of human brain samples will be needed to study this aspect in depth."

Wow, that sounds like kind of a confession of failing to do the work in a way that would inspire confidence. So why on Earth was Lazarov boasting in the press release that this study was "a big step forward in understanding how the human brain processes cognition, forms memories and ages"?  

Let us now look at some neuroscientists who have denied the doctrine of adult human neurogenesis, by denying that human adults create new brain cells. 

  • A 2018 paper states, "Our recent observations suggest that newborn neurons in the adult human hippocampus (HP) are absent or very rare (Sorrells et al., 2018)." The paper notes that "studies supporting the presence of adult human hippocampal neurogenesis are not consistent with each other: some report a sharp decline and small, negligible contribution in adults...others support continuous high levels of neurogenesis in old age (Spalding et al., 2013; Boldrini et al., 2018), but show extremely high variability."
  • A 2018 paper states "2 independent papers coming from different parts of the world have used a similar approach and methodology leading to converging results and the following similar conclusions: hippocampal neurogenesis in humans decays exponentially during childhood and is absent or negligible in the adult." It says these papers "are Sorrells et al. (2018) from the lab of Alvarez-Buylla in USA published in March in Nature, and the study by Cipriani and coworkers from the Adle-Biassette’s lab in France published in this issue of Cerebral Cortex (2018; 27: 000–000)."
  • A 2018 article in The American Scientist (co-authored by Sorrells and Alvarez-Buylla) is entitled "No Evidence for New Adult Neurons."  It said, "Adult human brains don’t grow new neurons in the hippocampus, contrary to the prevailing view." The authors criticize previous reports of adult neurogenesis partially by saying they "frequently used only a single protein to identify new neurons," which was a faulty technique because "we found that the protein most often used, one called doublecortin, can also be seen in nonneuronal brain cells (called glia) that are known to regenerate throughout life." A 2022 article entitled "Doublecortin and the death of a dogma" refers to work by Franjic, saying, "out of the 139,187 nuclei sequenced, only 2 showed appropriate transcriptomes for neural precursor cells... suggesting adult human neurogenesis is rare, if it occurs at all."
  • A 2019 paper says that "a balanced review of the literature and evaluation of the data indicate that adult neurogenesis in human brain is improbable," and that "several high quality recent studies in adult human brain, unlike in adult brains of other species, neurogenesis was not detectable."
  • A 2018 paper claimed that "New neurons continue to be generated in the subgranular zone of the dentate gyrus of the adult mammalian hippocampus," but the paper's title was "Human hippocampal neurogenesis drops sharply in children to undetectable levels in adults."
  • Describing his research, the neuroscientist Ashutosh Kumar stated in 2020 that he had found this: "Progression of neurogenesis is restricted after childhood, and reduces to negligible levels around adolescence and onwards."
  • A 2022 paper was entitled "Mounting evidence suggests human adult neurogenesis is unlikely."
  • A 2022 paper states, "In this review, we will assess critically the claim of significant adult neurogenesis in humans and show how current evidence strongly indicates that humans lack this trait." The paper states that "In summary, a thorough review of the literature shows that there is no scientific convincing evidence of the generation and incorporation of new neurons into the circuitry of the adult human brain, including the dentate gyrus of the hippocampus."  Noting how false claims persist within neuroscience, the paper states, "As Victor Hamburger, co-discoverer of the nerve growth factor, said at an informal meeting: 'A single report of an incorrect finding that many people like, takes more than hundreds of papers with negative findings to make an acceptable correction.' ” 
  • A 2021 paper was entitled "Positive Controls in Adults and Children Support That Very Few, If Any, New Neurons Are Born in the Adult Human Hippocampus."

There is another reason for disbelieving the "superagers make more neurons" story, besides the lack of evidence for adult neurogenesis. The additional reason is that when asked to explain memory creation, neuroscientists typically appeal not to neurons but to synapses existing outside of neurons.  The senseless story that neuroscientists keep telling is that memory creation occurs by "synapse strengthening," an idea that makes no sense for a variety of reasons, including the fact that information is never stored by an act of mere strengthening, and also the fact that synapses are made up of proteins with an average lifetime 1000 times shorter than the longest length of time that humans can remember things (50 years or more). Conceivably a neuroscientist might get a talking point in favor of such a theory of synaptic memory storage if he were to show that the synapses of superagers are stronger or more abundant than the synapses of typical agers. But you would not get a talking point to support such a theory of synaptic memory storage if you merely showed that superagers created more neurons than normal agers. 
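The timescale mismatch mentioned above is easy to check with back-of-envelope arithmetic (the two-week protein lifetime used here is an assumed round figure of the kind reported in protein turnover studies, not a number taken from any particular paper):

```python
# Back-of-envelope check of the timescale mismatch described above.
# ASSUMPTION: synaptic proteins turn over on a scale of roughly two weeks,
# an illustrative round figure, not a measurement from any cited study.
protein_lifetime_days = 14
memory_span_days = 50 * 365  # memories that last 50 or more years

ratio = memory_span_days / protein_lifetime_days
print(f"memory span is about {ratio:,.0f} times the assumed protein lifetime")
```

With these assumptions the ratio comes out at roughly 1300 to 1, which is the rough order-of-magnitude gap between protein lifetimes and human memory spans referred to above.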

Sunday, March 8, 2026

One Year Later, the Blog of the Cognitive Neuroscience Society Still Sounds Evidence-Poor

On March 7, 2025 I posted on this blog a post entitled "Examining the Evidence-Poor Blog Archive of the Cognitive Neuroscience Society, 2020 to 2025," one you can read here.

After reviewing all of the posts that sounded as if they might be relevant to "brains make minds" claims and "brains store memories" claims, I stated this:

"So examining all the posts on this [Cognitive Neuroscience Society] site from January 2020 to March 2025 that had headlines sounding like they might be some substantive evidence for 'brains make minds' and 'brains store memories' claims, I find no such substantive evidence. We have lots of cognitive neuroscientists claiming to know things they don't know. But a close look at their research always fails to find robust evidence in support of the dogmas that cognitive neuroscientists keep chanting."

It has now been one year since I wrote that post. Let us look at all the last year's posts on the blog of the Cognitive Neuroscience Society, to see whether anything in the past year should change our opinion about the blog being evidence-poor in regard to the main claims of cognitive neuroscientists. 

Here are all the posts the blog has published in the past year:

March 3, 2026: "From an Outsider to a Champion for the Cognitive Neuroscience of Emotion." We have an interview with the neuroscientist Joseph LeDoux, who is the author of a book with the dumb title "The Emotional Brain." It is not brains that are emotional, but people who are emotional. We have this ridiculous statement by the neuroscientist: "Eventually, I would write the book The Emotional Brain [one of several books written by LeDoux], which I think helped put emotion on the map." It is an example of the kind of senseless boasting that neuroscientists frequently engage in. Obviously emotions were "on the map" long before such a book was written. The next statement by LeDoux is equally ridiculous, with him saying, "If you take a look at the study of consciousness, one thing that's missing is emotion." No, scholars of the human mind have always made human emotions one of the main objects of their study. Nothing that LeDoux says does anything to support claims that brains make minds or that brains store memories. 

January 9, 2026: "Threading Together Attention Across Human Cognition." We have an interview with neuroscientist Monica Rosenberg. She starts out by saying, "Brains are so commonplace in our lives that it’s easy to take them for granted, but when you stop to think about it, it’s absolutely remarkable that our minds emerge from electrified meat." A more truthful sentence would be "it's absolutely unbelievable that our minds emerge from electrified meat." Rosenberg incorrectly states, "We have evidence that individual differences in features of brain activity, like functional connectome organization, can predict differences in behavior but we still don’t understand why." No such evidence exists, if by "differences in behavior" you mean the type of choices a person would make. Rosenberg makes unfounded boasts about things done by people she works with. She claims, "Ziwei Zhang, a PhD student in my lab, showed that dynamics of the same functional brain network predict when people are surprised as they do a learning task and watch basketball games." There's a link to the paper here, entitled "Brain network dynamics predict moments of surprise across contexts." It's not any good evidence for brains making minds. It is well known that particular types of emotions tend to produce distinctive muscle movements showing up as facial expressions; and particular types of muscle movements show up as distinctive blips in fMRI scans or EEG readings. 

muscle movements affect EEG readings

December 16, 2025: "Taking Action Seriously in the Brain: Revealing the Role of Cognition in Motor Skills." We have an interview with a scientist studying the role of the brain in motor skills. He talks about "motor working memory" without clearly explaining what he means by that term. The idea of "working memory" has some substance in reference to a kind of "mental scratchpad" where you can remember a few things for a short time, without memorizing them. There is no neuroscience understanding of how that works, and there is no part of the brain that corresponds to such a "mental scratchpad." The idea of a working memory involving motor skills is not one that has much substance. Motor skills such as learning to swim or ride a bicycle require repeated practice sessions. It is clear that brains have some involvement in muscle movement. But such a relation does nothing to show that your brain produces your mind. 

November 24, 2025: "50 Years of Busting Myths About Aging in the Brain." We have an interview with neuroscientist Carol Barnes, who studies aging in brains. Barnes makes this groundless claim: "For my PhD thesis, I was to build off work that had discovered the biological basis of memory, long-term potentiation (LTP)." No one has ever discovered any biological basis for memory. The claim that LTP is any such thing is a groundless legend of neuroscientists (as I discuss in my posts here). Utterly unable to account for memories that can last decades, LTP is a very short-lived change produced by artificial electrode stimulation. The very term "long-term potentiation" is a misleading one, as so-called LTP typically decays away within days or weeks. 

Barnes then proceeds to recite another groundless legend of neuroscientists, the legend that "place cells" were discovered. She says, "In my postdoctoral work with John O’Keefe, we looked at the first recordings from place cells in young and old rats." A look at O'Keefe's research on this topic will show that it was not robust research. Here is a quote from a previous post of mine:

"The 'place cells' papers of John O'Keefe that I have examined are papers that do not meet standards of good experimental science. An example of such a paper was the paper 'Hippocampal Place Units in the Freely Moving Rat: Why They Fire Where They Fire.'  For one thing, the study group size used (consisting of only four rats) was way too small for robust evidence to have been produced. 15 animals per study group is the minimum for a moderately convincing result in animal studies looking for correlations.  For another thing no blinding protocol was used. And the study was not a pre-registered study, but was apparently one of those studies in which an analyst is free to fish for whatever effect he may feel like finding after data has been collected, using any of innumerable possible analysis pipelines."

The interview contains no claims by Barnes of any substance backing up assertions that brains produce minds or that brains store memories. All of her main references to research are references to weak, unconvincing research. 

November 3, 2025: "Making the Brain Language Ready: A Journey of Discovery." We have an interview with neuroscientist Peter Hagoort, who has tried to show some brain basis for language ability. Hagoort talks on and on, but fails to show any knowledge of how a brain could acquire a language or allow people to speak as rapidly as they do. He mentions aphasia. A person can have a stroke that destroys or damages his ability to speak. But that merely shows that when people speak they are using muscles; and no one doubts that brains have a strong connection to muscle activity. 

September 10, 2025: "The Lasting Cognitive Effect of Smell on Memory." We have an interview with a scientist doing research on smells and memory. Without giving any specifics, the scientist makes vague claims trying to suggest links between smells, brains and memory. About all we know on this topic is that particular smells can evoke particular memories. 

August 14, 2025: "Language in the Brain is More Than the Sum of Its Parts."  We read this: "In a new paper in the Journal of Cognitive Neuroscience, Bourguignon and Salvatore Lo Bue make a case for language as an emergent property of the brain." Such talk is very misleading. A property of something is some simple characteristic that can be expressed by a single number. For example, properties of a wooden cube include length, width, height and weight, each of which can be expressed by a single number. Language is not a property but a capability, an extremely impressive capability with many different aspects. So trying to explain language by describing it as a "property" is deeply erroneous. We hear from a neuroscientist named Bourguignon who says, "We argue for a broad, seemingly simplistic, yet unbiased, conception of language as 'any piece of information that can be verbalized.' " That is a senseless conception of language. It sounds more simpleminded than "seemingly simplistic." Language is something gigantically more than "any piece of information."  Bourguignon confesses that "the very notion that language cannot be pinpointed in a neatly delineated area or network of the brain will be surprising to some people, and unpalatable to others." That sounds like what we might find if brains cannot explain language. 

June 30, 2025: "Exploring Auditory Interconnectivity One Sound at a Time."  We have an interview with a neuroscientist studying the brain and hearing. Nothing discussed backs up "brains make minds" claims (as opposed to mere "brains are involved in hearing" claims). 

May 29, 2025: "How Was Your School Day?: Unpacking Free Recall in Young Children." We hear nothing of any real relevance to "brains make minds" or "brains store memories" claims. 

April 21, 2025: "Moving Beyond Traditional Pathways in Cognitive Neuroscience." We hear nothing of any real relevance to "brains make minds" or "brains store memories" claims. 

April 1, 2025: "CNS 2025: Day 4 Highlights." We have only skimpy one-sentence mentions of presentations at a scientific conference. 

April 1, 2025: "How VR Technology is Changing the Game for Alzheimer’s Disease." The headline is incorrect. VR technology (virtual reality technology) is not "changing the game" in regard to Alzheimer's disease. We hear merely about some fancy virtual reality system used to diagnose Alzheimer's disease. The disease can be diagnosed through much simpler methods. 

April 1, 2025"CNS 2025: Day 3 Highlights." We have only skimpy one-sentence mentions of presentations at a scientific conference. 

March 31, 2025: "How Dreams, Novelty, and Emotions Can Shape Memories: Lessons from Smartphone Studies."  We have a discussion of some research involving memory, conducted with smartphones. It is pure psychology research. The authors claim a connection between sleep and memory recall. But there is no discussion of any brain data, so the post does nothing to back up "brains store memories" claims. 

March 31, 2025"CNS 2025: Day 2 Highlights." We have only skimpy one-sentence mentions of presentations at a scientific conference. 

March 30, 2025"CNS 2025: Day 1 Highlights." We have only skimpy one-sentence mentions of presentations at a scientific conference. 

It would seem from the last year of posts at the blog of the Cognitive Neuroscience Society that cognitive neuroscientists are not making any real progress in their attempts to provide evidence backing up claims that brains make minds and that brains store memories.