Sunday, March 26, 2023

Adult Neurogenesis Claims Are a Case Study of Neuroscientists Dogmatically Asserting Dubious Claims

Do human adult brains create new brain cells or neurons? The neuroscientist belief community cannot get its story straight on this topic. Very many neuroscientists matter-of-factly claim that the brains of human adults do create new neurons, and thereby assert as fact the doctrine of adult neurogenesis. Many other neuroscientists matter-of-factly claim that the brains of human adults do not create new neurons, and thereby deny the doctrine of adult neurogenesis.

Neuroscientists Teaching the Doctrine of Adult Neurogenesis

Let us first look at some neuroscientists who have asserted the doctrine of adult human neurogenesis, by claiming that human adults do create new brain cells. 

  • In a 1998 paper "Neurogenesis in the Adult Human Hippocampus," one that has been cited more than 8000 times, scientists claimed "we demonstrate that new neurons, as defined by these markers, are generated from dividing progenitor cells in the dentate gyrus of adult humans," and that "Our results further indicate that the human hippocampus retains its ability to generate neurons throughout life." 
  • A 2014 paper claimed, "Our findings demonstrate a unique pattern of neurogenesis in the adult human brain."
  • A 2015 paper claimed that "it is now known that neurogenesis persists throughout the human lifespan, and new neurons are being formed in the adult brain."
  • A 2021 paper claimed that "most published studies provide evidence for childhood and adult neurogenesis in the human brain stem cell niches."
  • A 2022 paper predicted that future studies "will confirm our indications that adult human neurogenesis is orchestrated in a broad brain area."
  • A 2018 paper notes that "Renewed discussion about whether or not adult neurogenesis exists in the human hippocampus, and the nature and strength of the supporting evidence, has been reignited by two prominently published reports with opposite conclusions," but nonetheless says "there is currently no reason to abandon the idea that adult-generated neurons make important functional contributions to neural plasticity and cognition across the human lifespan."

Neuroscientists Recently Denying the Doctrine of Adult Neurogenesis

Let us now look at some neuroscientists who have denied the doctrine of adult human neurogenesis, by denying that human adults  create new brain cells. 

  • A 2018 paper states, "Our recent observations suggest that newborn neurons in the adult human hippocampus (HP) are absent or very rare (Sorrells et al., 2018)." The paper notes that "studies supporting the presence of adult human hippocampal neurogenesis are not consistent with each other: some report a sharp decline and small, negligible contribution in adults...others support continuous high levels of neurogenesis in old age (Spalding et al., 2013; Boldrini et al., 2018), but show extremely high variability."
  • A 2018 paper states "2 independent papers coming from different parts of the world have used a similar approach and methodology leading to converging results and the following similar conclusions: hippocampal neurogenesis in humans decays exponentially during childhood and is absent or negligible in the adult." It says these papers "are Sorrells et al. (2018) from the lab of Alvarez-Buylla in USA published in March in Nature, and the study by Cipriani and coworkers from the Adle-Biassette’s lab in France published in this issue of Cerebral Cortex (2018; 27: 000–000)."
  • A 2018 article in The American Scientist (co-authored by Sorrells and Alvarez-Buylla) is entitled "No Evidence for New Adult Neurons." It said, "Adult human brains don’t grow new neurons in the hippocampus, contrary to the prevailing view." The authors explained away previous reports of adult neurogenesis partly by saying such studies "frequently used only a single protein to identify new neurons," which was a faulty technique because "we found that the protein most often used, one called doublecortin, can also be seen in nonneuronal brain cells (called glia) that are known to regenerate throughout life." A 2022 article entitled "Doublecortin and the death of a dogma" refers to work by Franjic, saying, "out of the 139,187 nuclei sequenced, only 2 showed appropriate transcriptomes for neural precursor cells... suggesting adult human neurogenesis is rare, if it occurs at all."
  • A 2019 paper says that "a balanced review of the literature and evaluation of the data indicate that adult neurogenesis in human brain is improbable," and that "several high quality recent studies in adult human brain, unlike in adult brains of other species, neurogenesis was not detectable."
  • A 2018 paper claimed that "New neurons continue to be generated in the subgranular zone of the dentate gyrus of the adult mammalian hippocampus," but the paper's title was "Human hippocampal neurogenesis drops sharply in children to undetectable levels in adults."
  • Describing his research, the neuroscientist  Ashutosh Kumar stated in 2020 that he had found this: "Progression of neurogenesis is restricted after childhood, and reduces to negligible levels around adolescence and onwards."
  • A 2022 paper was entitled "Mounting evidence suggests human adult neurogenesis is unlikely."
  • A 2022 paper states, "In this review, we will assess critically the claim of significant adult neurogenesis in humans and show how current evidence strongly indicates that humans lack this trait." The paper states that "In summary, a thorough review of the literature shows that there is no scientific convincing evidence of the generation and incorporation of new neurons into the circuitry of the adult human brain, including the dentate gyrus of the hippocampus."  Noting how false claims persist within neuroscience, the paper states, "As Victor Hamburger, co-discoverer of the nerve growth factor, said at an informal meeting: 'A single report of an incorrect finding that many people like, takes more than hundreds of papers with negative findings to make an acceptable correction.' ” 
  • A 2021 paper was entitled "Positive Controls in Adults and Children Support That Very Few, If Any, New Neurons Are Born in the Adult Human Hippocampus."

What the examples here mainly show is the tendency of many neuroscientists to state extremely dubious claims as if they were facts. That goes on all over the place in the world of neuroscience. It occurs most commonly when neuroscientists make the dubious and unproven claims that memories are stored in the human brain, and that the human brain is the cause of human thinking, self-hood and imagination. 

Sunday, March 19, 2023

Wobbly Hand-Waving Guesswork of the Synapse Memory Theorists

Let us look at a science paper attempting to convince us that memories are stored in synapses. The 2018 paper by Wayne S. Sossin is entitled "Memory Synapses Are Defined by Distinct Molecular Complexes: A Proposal." You should take careful note of the phrase "a proposal" at the end of the title. It indicates the speculative nature of what Sossin writes about. To make a proposal about some possibility typically does not have a hundredth of the worth of making an observation showing the likelihood of such a possibility. 

The abstract of the paper incorrectly states "there are strong evidential and theoretical reasons for believing that memories are stored at synapses." No such reasons exist, but there are very strong reasons for rejecting such a claim, such as the very short lifetimes of the proteins that make up synapses (less than two weeks), and the instability of the dendritic spines such synapses are attached to. The body of the paper begins by stating, "Most neuroscientists believe that memories are encoded by changing the strength of synaptic connections between neurons (Mayford et al., 2012; Poo et al., 2016)." The two references do not take us to any papers showing that most neuroscientists believe such claims about synapses. They merely take us to two papers in which a few neuroscientists seem to speak as if they support such claims. 

The claim that most neuroscientists believe such a thing about synapses is a common claim, but it is never well-supported. How could you substantiate such a claim, which is not a claim about synapses themselves, but instead a claim that most neuroscientists believe some particular thing about synapses?  The only way you could do that is by referring to an opinion poll taken of neuroscientists, one that showed that a majority of them believed that memories are stored in synapses. For the opinion poll to be persuasive, it would have to be a secret ballot poll. Any kind of "show of hands" claim would not be persuasive, because there are sociological reasons and psychological reasons why scientists might prefer to publicly act like they are going along with some majority, rather than publicly defying such a majority.  But secret ballot polls of neuroscientists virtually never are done.  We have no actual evidence that a majority of neuroscientists believe that memories are stored in synapses. We merely have quite a few neuroscience papers in which authors claim that most neuroscientists believe such a thing. Even if it were true that most neuroscientists believe that memories are stored in synapses, that would not show the likelihood of such a thing. There are all kinds of sociological and groupthink and "follow the herd" reasons why an erring belief tradition about synapses might arise in the neuroscientist community.

The second sentence of the body of the paper makes a misleading claim: "The great success of deep learning systems based on units connected by modifiable synaptic weights has greatly increased the confidence that this type of computational structure is a powerful paradigm for learning." So-called "neural nets" do not use any arrangement matching an arrangement of matter in the brain, and never should have been called "neural nets." The parts of a neural net that are sometimes called "synaptic weights" (but usually just called "weights") are stable electronic units or software variables that bear no resemblance to the unstable, non-electronic synapses in the brain. Computerized "deep learning" systems do nothing to substantiate the idea that synapses store memories. 
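To make the contrast concrete, here is a minimal sketch of my own (not anything from Sossin's paper) of what a "weight" actually is in a software neural net: a stored floating-point number that sits unchanged unless a program deliberately updates it.

import numpy as np

# A minimal illustration: in a software "neural net," a "synaptic weight" is
# nothing but a stored floating-point number.
rng = np.random.default_rng(seed=0)
weights = rng.normal(size=(3, 4))   # a 3x4 matrix of connection weights

def layer_output(inputs, weights):
    # One layer of a toy network: a weighted sum passed through a nonlinearity.
    return np.tanh(inputs @ weights)

x = np.array([0.5, -1.0, 2.0])
print(layer_output(x, weights))

# Unless the program deliberately changes them, these numbers persist unchanged
# for as long as the file or process holds them -- unlike a biological synapse,
# whose component proteins turn over within weeks.
print(weights[0, 0])   # the same value every time it is read

Nothing in such software resembles the unstable, protein-built connections of the brain.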

Without trying to summarize the reasoning of the paper, I can give you a taste of how speculative it is by simply quoting statements in it that use the words "may," "could" or "might":

"The selection of a neuron to participate in a memory may also leave long-lasting transcriptional marks such as changes in histone and DNA methylation, and long-term changes in the organization of the nucleus....If transcription were also required for the maintenance of memory, it would suggest that a form of synaptic tagging may also be important for the maintenance of memory. However, in synaptic tagging the half-life of the tag has been measured to be at most a few hours (Martin et al., 1997Frey and Morris, 1998b), whereas the half-life of the tag would need to be much longer for it to play a role in the maintenance of memory....Conformational changes induced by the priming phosphorylations may also be important in maintaining binding interactions important for localization of the PKMs and may be important for the ability of these reagents to work as isoform-specific dominant negatives....A model for a positive feedback loop of phosphorylation has been proposed...Compensation for the loss of PKMζ may be due to cleavage of PKCι to PKMι by calpains, although this is purely speculative at present.... Increasing AMPA receptor endocytosis may play a role in active forgetting.... Inhibition of BRAG2 may be a target of persistent kinase phosphorylation (Sacktor, 2011). However, since many forms of LTD require BRAG2 (Scholz et al., 2010), the specific removal of AMPA receptor complexes at memory synapses may also require distinct motifs or adaptors that are regulated by persistent kinase activity in addition to BRAG2....Connection between two neurons may consist of a stable component and a memory component. Removal of either component may be sufficient to reduce synaptic strength and thus compromise memories that depend on this increase in synaptic strength....Memory synapses or memory modules at synapses may be defined through specific trans-synaptic adhesive interactions that align the AMPA receptor complexes specific for memory with specific presynaptic molecular complexes. These presynaptic molecular complexes may also have specializations important for memory...There may be multiple adhesion pairs that define distinct types of memory synapses or memory modules at synapses. One can envision trans-synaptic adhesion as defining what is often referred to in the plasticity literature as a “slot.”...An appealing model would be that as well as causing endocytosis of the AMPA receptors, the 'slot' proteins would also be a target of endocytosis after inhibition of the persistent protein kinase....The concept of a memory synapse remains an unproven hypothesis.  Generating a memory synapse may require multiple rounds of modifications...Both pre-and postsynaptic gene expression may be required to generate specific adhesion proteins that attract specific AMPA receptor complexes and presynaptic specializations,

We also have these statements:

"The dominant negative PKMs could distinguish molecular complexes involved in associative and non-associative LTF, interfering specifically at synapses with one type of complex but not the other,.. Inhibition of persistent kinases in the presynaptic cell could also lead to endocytosis of the presynaptic partner of the adhesion complex."

This speculative gobbledygook does not at all add up to a real theory of how synapses could store memories. All that we have here is some jargon making noises that may sound like an explanation to some people. A real theory of synaptic memory storage would give us specific hypothetical examples describing precisely how some specific learned information (such as the statement "my dog has fleas") could be stored in synapses. We never get such specifics from synaptic memory theorists. We basically get just jargon-laden hand waving, often decorated with poorly documented equations designed to impress us. 

A 2019 paper gives us another example of wobbly hand-waving guesswork. We read this:

"A signal complex like CaMKII/Tiam1 is proposed as a molecular memory... CaMKII may also act as an activity-dependent scaffold to assemble proteins at the synapse in addition to F-actin binding through CaMKIIβ....CaMKIIβ may play a structural role in targeting the RAKECs of CaMKII in the synapses via actin...These knock-in CaMKIIα molecules may have a dominant negative effect to form RAKECs in the synapses...These kinases may form a RAKEC with their specific substrates or upstream kinases....The RAKEC may be a general mechanism for the maintenance of the biochemical activity of kinase and its substrate...Synaptic localization of βPIX may also be regulated by LLPS."

Then we have in the same paper these uses of the word "could":

"Actived [sic] CaMKII could be diluted by inactive CaMKII from the cytosol....Upregulation of the CaMKIIα protein after LTP could further dilute activated CaMKIIα...The regulation of F-actin could be a candidate for LTP maintenance....CaMKII could act as an activity-dependent scaffold for assembling signaling proteins in the synapse...The interaction of CaMKII with synaptic proteins, including NMDAR, could be phase-separated from other synaptic proteins...The RAKEC between Tiam1 and CaMKII may serve not only as signal machinery to maintain Rac1 activity but could also form the LLPS, in which many synaptic signaling proteins are assembled together or are put on standby to act efficiently and effectively to maintain LTP for longer periods."

A diagram in the paper has a circle with "CAMKII" inside it, surrounded by eight text boxes, five of which have a question mark at the end. I cannot recall ever seeing so many question marks in a scientific diagram. At the bottom of the diagram we see an arrow pointing to the words "memory maintenance," which hardly makes sense given that the average lifetime of a CaMKII protein molecule is a mere 30 hours (according to the paper here). 

There is a reason why both papers quoted above are unsubstantial. You cannot have a substantial theory of the long-term preservation of memories in the human brain (with memories being preserved for decades) unless you first have a detailed theory of how human learned knowledge could be represented in a brain. Neither of these papers advances or endorses any such theory. There simply does not exist a detailed theory of how the many diverse forms of human learned knowledge and human experience could be represented in a brain. If you don't have such a theory, any set of speculations about "memory maintenance" or the lifelong preservation of memories in a brain is a "castle on a cloud" affair rather like a penthouse apartment without any apartment building underneath it. 

One of the papers above makes a speculative appeal to a "feedback loop." Appeals to the possibility of such "feedback loops" appear often in the speculative papers of brain memory theorists. Such appeals are sterile and futile, being based on the erroneous simplistic nonsense that "synapse strengthening" can explain memory. To explain a physical storage of human experiences and learned information, you would need some incredibly complicated and multifaceted system (unlike any yet discovered in the brain), one vastly too complex to be centered around mere strengthening. Having foolishly got in bed with the vacuous notion that memories can be explained by some mere strengthening, our synapse theorists then think that they can account for the gigantic discrepancy between the long lifetimes of memories and the short lifetimes of synapses, their dendritic spines, and synapse proteins by speculating about "feedback loops" that might preserve the strength of some synapse. But since the whole original idea that a mere strength level could store a memory was very foolish, this appeal to never-discovered "feedback loops" preserving synapse strength is equally foolish. Similarly, thinking that information can be stored in clouds is foolish, so thinking that information could be preserved in clouds long-term through "cloud-to-cloud information transfer" is an idea that is futile and sterile. 

In the papers presenting such speculations, we typically see a simple-looking diagram mentioning a feedback loop. The diagram below gives you a better idea of the kind of thing that would have to be going on for "feedback loops" to be preserving information in unstable synapses. Every little token of information would need to have its own special "feedback loop" to preserve that particular type of information. So the stable storage of as simple a piece of information as "my dog has fleas" in unstable synapses would require many types of feedback loops, as shown in the diagram below:

memory maintenance feedback loop

You may realize how utterly nonsensical it is to imagine such a thing when you realize that the chemical units imagined to implement these feedback loops would be short-lived chemicals, and that synapses are attached to dendritic spines that do not last for years, and often don't even last for six months.  So such "feedback loops" would themselves be very unstable, and could never account for stable memories that last for decades. 

One of the papers quoted above speculates that CaMKII might be the key to some miracle of "memory maintenance" by which memories last for decades while stored in synapses that don't last for years, built of proteins that last for less than two weeks. Contrary to such speculations, a 2019 paper states, "Overall, the studies reviewed here argue against, but do not completely rule out, a role for persistently self-sustaining CaMKII activity in maintaining LTP and LTM [long-term memory]." A speculative appeal has also been made to the idea that a protein PKCζ might help cause such a miracle of memory maintenance, but a paper says that "LTP and memory are retained in PKCζ knockout mice," referring to mice genetically modified to have no such PKCζ. The paper tells us, "In the strongest sense of ‘necessity’ for LTP/memory maintenance, therefore, neither CaMKII nor PKMζ meets the criterion."
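To get a sense of the scale of the mismatch being explained away here, consider some simple arithmetic of my own (purely illustrative, using the lifetime figures cited above):

# Back-of-the-envelope arithmetic using the figures cited in this post:
# synapse proteins lasting less than two weeks, memories lasting for decades.
protein_lifetime_days = 14        # "less than two weeks"
memory_duration_years = 50        # a memory retained from youth to old age
replacement_cycles = (memory_duration_years * 365) / protein_lifetime_days
print(round(replacement_cycles))  # roughly 1300 complete protein turnovers

On these figures, the molecular hardware supposedly holding a fifty-year-old memory would have been completely replaced on the order of 1,300 times.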

The papers quoted above are from 2018 and 2019, and in 2023 we still have nothing weightier to substantiate claims that memories are stored in synapses. A 2022 paper makes this confession:

"We are still far from identifying the 'double helix' of memory—if one even exists. We do not have a clear idea of how long-term, specific information may be stored in the brain, into separate engrams that can be reactivated when relevant."

Another 2022 paper makes this confession: "How the brain stores and retrieves memories is an important unsolved problem in neuroscience."

Postscript: A 2022 paper has the very unjustified title "CaMKII: a central molecular organizer of synaptic plasticity, learning and memory." No robust evidence is produced for such claims, and we mainly have lots of uses of "may," "might," "perhaps" and "could." 

For example, here are the paper's uses of "may":

"CaMKII may exist in distinct populations in dendritic spines....CaMKII may serve as a ‘sponge’ to sequester various postsynaptic proteins, thereby triggering Ca2+-dependent trafficking of various proteins...Phosphorylation of SHANK3 by CaMKII may play a part here...This pool may represent cytosolic, unbound CaMKII molecules.... The interaction of CaMKIIα with α-actinin may also contribute to its interaction with F-actin...Reciprocal activation within a kinase–effector complex may be a general mechanism to maintain local CaMKII signalling for an extended period...This so-called inverse tagging of non-stimulated spines with ARC may increase the contrast of synaptic weights...CaV1.2-mediated influx may be sufficient to enable temporary activation of CaMKII...CaMKII may mediate LTP partly by phosphorylating TARPs...NMDAR-mediated calcium influx may make TARPs more accessible for CaMKII,...RHOA activation in response to glutamate uncaging may require other calcium-dependent processes in addition to CaMKII, which may indirectly regulate RHOA...RHOA may destabilize spines so that they can undergo structural changes during synaptic plasticity...Calcium signalling may be inhibited by CaMKII activation."

Here is where the paper uses the word "might":

"The CaMKII pools in spines that mediate these effects might be small and functionally defined by prolonged interactions of CaMKII with its various binding partners that enable them to contribute to the maintenance of basal transmission and long-term memory...There might be about 100-fold more CaMKII molecules in spines than GluN2B... LLPS of CaMKII might mediate not only the accumulation of CaMKII beneath the synaptic contact but also the formation of glutamate receptor nanodomains at the surface...That α-actinin also binds CaMKII might be important for the CaMKII mediated phosphorylation of S73 of PSD95...This might impair the inhibitory activity of the other helix in the dimer in trans, and thus facilitate substrate binding...CaMKII might need to maintain its own phosphorylation of T286 for continued interactions with other postsynaptic proteins... Synapses in a knockout neuron could be at a collective disadvantage and ‘lose out’ to other synapses formed nearby that might become stronger because resources of the presynaptic input neurons might be allocated to the synapses that they form with surrounding neurons....NMDAR-anchored CaMKII might phosphorylate nearby AMPARs that then become trapped when diffusing through AMPAR nanodomains...S831 phosphorylation augments single-channel conductance of AMPARs134,135, which might contribute to an increase in postsynaptic response during LTP induction."

Here is where the paper uses the word "could":

"It could be explained by an additional phosphorylation state or association with other proteins that protect CaMKII from phosphatase action..... These changes could occur at the sites of pre-existing CaMKII–SHANK3 complexes or could induce a redistribution of CaMKII from the spine interior to the PSD...Binding could provide a pool of CaMKII inside spines that can relocate moderately quickly during early phases of LTP... The kinase could be fully activated by Ca2+–CaM, resulting in increased EPSC amplitude....Other sources of calcium could drive CaMKII activation under basal, non-stimulated conditions to augment synaptic strength...This requirement for catalytic activity to increase AMPAR-mediated currents could reflect the need to phosphorylate other substrates... This increase in postsynaptic response could be mediated by recruitment of GluA1 homomeric AMPARs....A lasting increase in postsynaptic AMPAR function could be through phosphorylation of the auxiliary AMPAR subunit TARPγ2...That TTPL is not required for (non-fusion) TARPγ8 to localize AMPARs153 could be explained by a second, electrostatic interaction site between TARPs and the first PDZ of PSD95.... The phosphorylation of T305 and T306 reduces CaMKII–NMDAR binding157 and increases autonomous CaMKII activity, which could especially apply to phosphorylation sites in CaMKII target proteins that are important for LTD.'

Here is where the paper uses the words "hypothesized," "postulate," "perhaps" or "proposed":

"We postulate that the LLPS of CaMKII has two key roles in synaptic plasticity....It has been proposed that CaMKII acts to a large degree as an activity-dependent structural protein... The GluN2B–CaMKII interaction has been hypothesized to occur in a binding pocket different from the substrate-binding pocket...Such protein–protein interactions have been proposed to provide another mechanism for molecular memory during LTP....Early studies proposed that calcium–calmodulin (CaM)-dependent kinase II (CaMKII) has two discrete binding sites:....Most of this integration over seconds is perhaps due to the autonomous activity of CaMKII...Perhaps under basal neuronal-activity conditions, NMDAR-mediated calcium influx is more effective than CaV1.2-mediated influx in stimulating T305/T306 phosphorylation and thus preventing an increase in AMPAR activity....Perhaps CaMKII facilitates synaptic strengthening through trans-synaptic mechanisms...Perhaps the C terminus of GluA1-fused TARPγ8 has a higher propensity than the free TARPγ8 to detach from the plasma membrane and thus to bind to PSD95, independent of its CaMKII-mediated phosphorylation."

Thursday, March 16, 2023

Two Free E-Books With Most of This Blog's Posts

I had previously created a free E-book containing the first half of this blog's posts, a book you can read online with the URL below:

https://archive.org/details/combinepdf_20200924

I have now created another free E-book containing the second half of this blog's posts, a book you can read online with the URL below:

https://archive.org/details/failing_dogmas_of_brain_researchers  

The two books are very easy to read using the "native viewer" of www.archive.org. After clicking on the rectangular icon at the bottom, you will be able to read the whole book with continuous finger swiping. The only disadvantage of using this "native viewer" is that hyperlinks do not work on it. You can get around that by downloading a PDF version using the links above. You will be able to use hyperlinks from such a PDF file. 

To see the complete collection of my free books on www.archive.org, use the link below:

https://archive.org/search?query=creator%3A%22Mark+Mahin%22

Saturday, March 11, 2023

Misleading Tricks of the Latest Claim of Mind-Reading by Brain Scans

In a previous post entitled "Suspect Shenanigans When You Hear Claims of 'Mind Reading' Technology" I discussed some of the tricks used by people claiming that brain scans can reveal mind activity. I discussed one example. It was the case of a researcher who had used some incredibly elaborate analysis pipeline that included brain scans to create movies. I pointed out that brain scans were only one element in the extremely elaborate set of inputs, and that it was misleading to claim that the output movies were generated from brain scans. I stated this:

"This bizarre and very complicated rigmarole is some very elaborate scheme in which brain activity is only one of the inputs, and the main inputs are lots of footage from Youtube videos.  It is very misleading to identify the videos as 'clips reconstructed from brain activity,' as the clips are mainly constructed from data other than brain activity. No actual evidence has been produced that someone detected anything like 'movies in the brain.' It seems like merely smoke and mirrors under which some output from a variety of sources (produced by a ridiculously complicated process) is being passed off as something like 'movies in the brain.' "

Recently we had another case of the press fooling us with untrue claims about mind-reading brain scans.  The Daily Mail gave us this bogus headline: "Scientists can now read your MIND: AI turns people's thoughts into images with 80% accuracy." Vice.com gave us this equally untrue headline: "Researchers Use AI to Generate Images Based on People's Brain." Upon analyzing the scientific paper that inspired these stories, I was able to figure out how the misleading "sleight of hand" is being done. It's a "fool you" mashup methodology. 

The paper is the one you can read here. It has the very misleading title "High-resolution image reconstruction with latent diffusion models from human brain activity." What is going on is that the researchers used an analysis methodology in which the actual brain scans are a superfluous input. They got AI outputs using a technique in which there was no need at all to use brain scans. The paper title is misleading because it implies that such brain scans were a crucial input, when they were in fact an unnecessary input. 

Below is an explanation of how it worked:

(1) There is a Natural Scenes Dataset that was created by some morally dubious excessive-seeming fMRI scanning in which subjects were brain-scanned with high-intensity 7T scanners for about 40 hours each while looking at natural scenes (a medically unnecessary risk to these subjects). That dataset is described in the paper here, entitled "A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence."

That dataset was created using images from the Microsoft Common Objects in Context (COCO) image dataset. The authors of that paper say, "We obtained 73,000 color natural scenes from the richly annotated Microsoft Common Objects in Context (COCO) image dataset." These authors then brain-scanned people while they were looking at these images, using 7T fMRI scanners. 

(2) The authors of the new paper ("High-resolution image reconstruction with latent diffusion models from human brain activity") traced back the images of the Natural Scenes Dataset to their COCO source, as they admit by saying, "The images used in the NSD experiments were retrieved from MS COCO and cropped to 425 x 425 (if needed)." The COCO database includes text annotations for each image, each consisting of one or more words identifying the image. The authors of the new paper clearly indicate that they grabbed these text annotations from the COCO database. They say they used an "average of five text annotations associated to each MS COCO image." 

(3) Having a text phrase associated with each image they used from the Natural Scenes Dataset, a phrase identifying what each image was, the authors used such text phrases as inputs to the Stable Diffusion generative AI, which can generate multiple images from text phrases. In case you have not tried the Stable Diffusion AI (which you can try using this link), it works like the example below. I typed in "spooky snowy castle" as the prompt, and the AI generated four images of spooky snowy castles:


(4) Some additional use was made of the actual brain scans from the Natural Scenes Dataset, but that was not necessary, and was probably just a little "icing on the cake," apparently as a way for the authors to kind of "cover their tracks" by some convoluted rigmarole making it harder for people to track down the main way their images were generated. 

(5) The authors then incorrectly claimed that they had done "image reconstruction ... from human brain activity." In fact, the human brain activity was a superfluous input, not a necessary part of the process. The method used to get the Stable Diffusion output images would have worked just fine without any brain scan data at all. All you need to get good Stable Diffusion output images of some particular type is a text prompt. And the researchers had got the appropriate text prompts by matching the images of the Natural Scenes Dataset with the source of the images (the MS COCO dataset), which has text phrases describing each of the images, as the sketch below illustrates. 
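Here is a hypothetical sketch of the kind of backdoor pipeline described in steps (1) through (5). This is my own illustration, not the paper's code: it assumes the publicly downloadable COCO captions file, an illustrative image ID, and the Hugging Face diffusers library for Stable Diffusion. The point is simply that nothing in this pipeline touches brain scan data.

# Illustrative sketch (not the paper's code) of the "backdoor" described above:
# once you know which COCO image a subject was shown, its text caption alone
# is enough to generate a similar image. No brain scan appears anywhere below.
import json
from diffusers import StableDiffusionPipeline

# Load the publicly available COCO caption annotations (file path illustrative).
with open("annotations/captions_val2017.json") as f:
    coco = json.load(f)
captions_by_image = {}
for ann in coco["annotations"]:
    captions_by_image.setdefault(ann["image_id"], []).append(ann["caption"])

shown_image_id = 139                            # hypothetical ID of the image a subject saw
prompt = captions_by_image[shown_image_id][0]   # step (2): grab its COCO caption

# Step (3): feed that caption to a text-to-image model such as Stable Diffusion.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
reconstruction = pipe(prompt).images[0]
reconstruction.save("reconstruction.png")

Any image produced this way will tend to resemble what the subject saw, simply because it was generated from a human-written description of that very image.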

This is a "smoke and mirrors" sleazy trick that you should not be fooled by. The authors incorrectly claimed that they had done "image reconstruction with latent diffusion models from human brain activity," when the human brain activity was not an essential input. The technique used here has no dependency on any brain scan data. The authors claim that they have shown you can "reconstruct high-resolution images with high semantic fidelity from human brain activity." This claim is untrue. Using only the brain activity data, the authors would be unable to create any images corresponding to what the subjects had seen when such brain scans were made. 

There were two cheats here: (1) the use of text annotations (descriptions of the images the brain-scanned subjects saw), descriptive phrases which the brain-scanned subjects never heard or saw; (2) the use of an image-generating AI that took these text phrases as inputs, rather than the brain scans. The authors have not reconstructed what the subjects saw from their brain scans. The authors have used a sneaky data backdoor to get something they never could have got from such brain scans alone. A legitimate attempt to reconstruct what people saw from brain scans would have used only the brain scans, and would never have succeeded. You cannot identify or reconstruct from brain scans what people saw or thought while their brains were being scanned. 

What is going on here is something rather like the conversation below: 

Jack: Did you know I can tell which restaurant you went to from a list of the items you ordered?
Jill:  Really? Let's try.
Jack: Okay, just give me a receipt you got from some dinner you ordered.
Jill: Okay, here's my receipt from last night. 
Jack: Okay, let me see, you ordered a large pizza and 2 medium Pepsi drinks. Using my astonishing algorithm, I deduce that you went to Santino's Pizza Palace on 34th Street.
Jill: Wow, that's amazing -- you figured out where I ate from what I ordered!

Of course, Jack has done no such thing. Jack is cheating. He simply read the name of the restaurant from the bottom of the receipt. 

As long as I am mentioning the Natural Scenes Dataset, let me mention a very troubling fact about that dataset: it was created by what seems like a recklessly excessive scanning of 8 subjects. The paper entitled "A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence" discusses some data collection used to create this dataset: a process in which eight subjects were brain scanned 30 to 40 times with 7T scanners more than twice as powerful as the 3T scanners or 1.5T scanners normally used for MRI scans. The paper states this: 

"The total number of 7T fMRI scan sessions were 43, 43, 35, 33, 43, 35, 43, and 33 for subj01–subj08, respectively. The average number of hours of resting-state fMRI conducted for each subject was 2.0 hours, and the average number of hours of task-based fMRI conducted for each subject was 38.5 hours."

This was in addition to other 3T scans the subjects were given.  The paper makes no mention of any consideration of health risks to these people, who received $30 per hour for the medically unnecessary scans. A 7T scanner would presumably have more than twice the risks of the 3T scanners.  Referring to mere 3T MRI scans, the 2022 paper "The effects of repeated brain MRI on chromosomal damage" found that "The total number of damaged cells increased by 3.2% (95% CI 1.5–4.8%) per MRI." The paper was referring to "DNA breaks" that have a possibility of increasing cancer risks. There is no medical need for anyone to receive more than one or a few MRI scans. Scanning subjects for 40 hours with 7T scanners seems rather like playing Russian roulette with the health of subjects, who might one day get cancer or dementia from such excessive scanning. It is dismaying that people were lured into undergoing such risks for "chump change" payments such as $30 per hour. 
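For a rough sense of what such repeated scanning might mean, here is a purely illustrative calculation of my own. It assumes, only for the sake of argument, that the 3.2%-per-scan figure reported for 3T MRI also applied to these 7T sessions and compounded across sessions; neither assumption comes from either paper.

# Purely illustrative arithmetic (my own, not from either paper): if a 3.2%
# per-scan increase in damaged cells simply compounded across roughly 40
# scan sessions, the cumulative increase would be on the order of:
per_scan_increase = 0.032
sessions = 40
cumulative = (1 + per_scan_increase) ** sessions - 1
print(f"{cumulative:.0%}")   # about 250%, i.e. roughly 3.5 times the baseline count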

All future claims to be generating images from brain scans should be regarded with the greatest suspicion whenever such claims make any use of the Natural Scenes Dataset. I have explained above how there is a tricky "backdoor" method by which anyone can generate images very similar to the images that the poor over-scanned subjects saw when the brain scans of that database were made. 

We can expect to see in the future some additional studies using sleazy tricks such as the one described here. There will be more and more confusing methodology papers that use complicated technological mashups that leverage AI and data backdoors.  Don't be fooled by such shenanigans. It is never possible to figure out what someone thought or saw from merely looking at brain scans, and any new paper suggesting otherwise will almost certainly be using complicated trickery designed to hide its sneaky sleight-of-hand. 

Postscript: Above I stated, "You cannot identify or reconstruct from brain scans what people saw or thought while their brains were being scanned." This statement is not at all discredited by papers such as the mistitled paper "The Code for Facial Identity in the Primate Brain." That paper does not meet good standards of experimental neuroscience. The paper is not a pre-registered paper that committed itself to one exact method of analysis before data was analyzed. The paper is one of those papers in which you get the suspicion that the authors were playing around with countless types of statistical analysis before ending up with what they reported. The analysis pipeline they report is some hopelessly convoluted and arbitrary rigmarole that fails to provide any convincing evidence for any such thing as a code for representing faces in brains. The statistics involved are so convoluted a can of worms (or perhaps we should say  "vat of worms") that it smells like irreproducible results.  I may note three fundamental failures:

(1) The lack of pre-registration, leaving the authors free to "keep torturing the data until it confessed."
(2) The lack of any blinding protocol, a necessity for a paper like this to be taken seriously.
(3) The use of only two monkey subjects (in a correlation study such as this, 15 subjects would be the minimum for a slightly impressive result). 

At the NBC News web site, we have a story entitled "From brain waves, this AI can sketch what you're picturing." The title gives the incorrect idea that scientists were trying to reconstruct what people were imagining (a common deceit of stories like this), although the actual study only involved what people were seeing while their brains were scanned. The story claims, "The resulting generated image matched the attributes (color, shape, etc.) and semantic meaning of the original image roughly 84% of the time." That's not a claim made by the scientific paper, which reports accuracy of only about 21% in its "Results on Different Subjects" section. The study used annotations in the COCO database (text descriptions of the images), so it apparently used the same kind of data backdoor trick described above. Again, we are given the misleading impression that images are being reconstructed (or the content of images guessed) based solely on brain scans, when no such thing is happening. Instead, the content of what someone saw is being guessed based on brain scans plus lots of data other than the brain scans. 

Misrepresentations of what went on in studies of this type are extremely common in the press. We may be told that such and such a study identified what people were thinking or picturing when the study merely involved people whose brains were scanned when they were seeing something or speaking words. 

The latest in misleading poor-quality science papers trying to insinuate mind-reading by brain scans is the paper "Semantic reconstruction of continuous language from non-invasive brain recordings." A few subjects were brain-scanned for 16 hours, and attempts were made to predict what they had heard, using both brain scans and "a generative neural network language model that was trained on a large dataset of natural English word sequences" in order to get some ability to predict the words that would follow from a sequence of previous words. We read, "Given any word sequence, this language model predicts the words that could come next." The meager results produced had a statistical significance of only "p < .05," which is very unimpressive. That's the kind of result you would expect to get by chance in one out of 20 tries. Again, we have a misleading formula of getting output using "brain scans plus some other huge thing" with the small claimed success coming mostly from the other huge thing, not the brain scans. No actual evidence has been provided that you can reconstruct what people were thinking or hearing from brain scans alone. For the sake of this piece of misleading parlor-trick junk science, some subjects had their brains scanned for 16 hours, a medically unnecessary risk to them which may have increased their chance of getting cancer or dementia. 
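To illustrate how much work the language model alone can do, here is a minimal sketch of my own (not the paper's decoder): an off-the-shelf GPT-2 model from the Hugging Face transformers library generating plausible continuations of a word sequence, with no brain data involved at all.

# A minimal illustration (my own, not the paper's method): an ordinary language
# model, given only the preceding words, already produces plausible guesses
# about what comes next -- with no brain recordings anywhere in the loop.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "I got up this morning and made myself a cup of"
continuations = generator(prompt, max_new_tokens=5, do_sample=True, num_return_sequences=3)
for c in continuations:
    print(c["generated_text"])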

Ever-eager to produce interesting-sounding click-bait stories that help increase page views and advertising revenue, the press has jumped on this story, producing some very misleading accounts that do not accurately describe the research, and that fail to tell us that the results come mainly not from analysis of brain scans, but from some high-tech AI trained to anticipate the most likely words that would follow from some text. 

Sunday, March 5, 2023

Studies New and Old Fail to Show a Big Link Between Brain States and Minds

A prediction of the dogma that your brain makes your mind is that the more brain injuries you have had, the worse off your mind should be. But a paper in the journal Science ("Effects of Penetrating Brain Injury on Intelligence Test Scores") refers to "the large number of reports describing 'negative' findings -- that is, the absence of demonstrable deficits in test performance, despite the presence of large cerebral lesions, especially in the frontal lobes." The 1957 paper compared IQ test scores of 60 armed forces members who had their intelligence tested before their penetrating brain injuries and tested again afterward. Speaking of results on IQ tests, the paper states, "These analyses demonstrated that lesions of the frontal and occipital lobes did not produce a significant decline in score, and that only lesions of parietal or temporal lobes of the left hemisphere showed a significant decrease." The soldiers with lesions in the following areas actually performed higher on IQ tests after their penetrating brain injuries, with an average increase of about 7%:

  • The left nonparieto-temporal region
  • The right parietal region
  • The right temporal lobe
  • The right parietotemporal lobe
  • The right nonparieto-temporal region

The only decrease in IQ scores occurred with injuries to the left parietotemporal lobe. These results contradict the results of a new paper entitled "Graph lesion-deficit mapping of fluid intelligence." Instead of finding a decrease in intelligence after right frontal damage, as reported by that new paper, the 1957 study found no decrease in intelligence after right frontal damage. The 1957 study used the Army General Classification Test, which is a more reliable test of intelligence than the Raven’s Advanced Progressive Matrices test used by the "Graph lesion-deficit mapping of fluid intelligence" study. One study found a correlation of less than .5 between the Raven’s Advanced Progressive Matrices and full-scale IQ. The Raven’s Advanced Progressive Matrices test is designed for people of above average intelligence, and is not well suited for testing intelligence damage in people of average intelligence. 

There are other reasons for doubting the "Graph lesion-deficit mapping of fluid intelligence" paper. The study hinges upon estimates of "premorbid IQ," someone's IQ before they had some brain damage. The study relies on something called the "NART IQ," which is an IQ estimate based on a test called the National Adult Reading Test. The National Adult Reading Test can be described as a "quick and dirty" way of very roughly estimating intelligence. It is used by doctors to get a rough idea about a patient's intelligence. Estimates of the correlation between a person's performance on the English NART test and the person's IQ have tended to be about .7, which is a fairly strong correlation, although not a very strong one. But a study tested the Dutch version of the NART test and found that the test in "its current form is not appropriate anymore to estimate premorbid IQ in both young and older adults," having a correlation with intelligence of less than .5. 

The study here ("The Relationship of Brain-Tissue Loss Volume and Lesion Location to Cognitive Deficit") tested IQ on 98 veterans with "penetrating brain wounds," finding those with wounds on the right side of the brain to have a mean IQ of 103, and those with wounds on the left side of the brain to have a mean IQ of 99. The paper "Neuropsychological and neurophysiological evaluation of cognitive deficits related to the severity of traumatic brain injury" studied the IQ of 90 patients, dividing them into three categories: mild traumatic brain injury, moderate traumatic brain injury, and severe traumatic brain injury. The mean IQ in each of these groups was about the same, being either 103 or 104. We read that "a surprising finding was that specific intelligence subtests did not show [sensitivity] even for differentiation between severe and mild injury." Such a result is surprising only to those who think your brain makes your mind, not those who reject such an idea. 

A recent study attempted to correlate brain volume and intelligence in 262 healthy brain-scanned persons aged between 55 and 80. An objectionable aspect of the study is that intelligence was measured using only a type of test that young people are known to do better on. We are told, "The Block Design test from the revised form of Wechsler Adult Intelligence Scale [41] was used to assess visuospatial ability and fluid IQ." If we follow the link in that statement, we come to a page telling us, "The results from this test show worse performance in older individuls [sic]."

Despite having chosen a test that is not a good general test of intelligence, presumably to get a more statistically significant result, the authors report only a mild correlation between gray matter change and cognitive change: an R of only .21. The upper left part of their figure 2A (shown below) shows more than 25 cases of people with less gray matter and more intelligence. The result fails to show any clear link between gray matter loss in aging and intelligence. 

gray matter and IQ

If the authors had used a better measure of intelligence (the full Wechsler Adult Intelligence Scale rather than only its Block Design test, which seniors do worse on), they would probably have got a correlation even smaller than the unimpressive .21 that they report. 
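For perspective, here is a quick bit of arithmetic of my own on what a correlation of .21 amounts to:

# Quick arithmetic (my own illustration): an R of 0.21 means gray matter change
# accounts for only about 4% of the variance in the cognitive scores.
r = 0.21
variance_explained = r ** 2
print(f"{variance_explained:.1%}")   # about 4.4%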

Recently a team of researchers decided to test the idea that brain damage causes memory damage by using retirees of the National Football League, people who had played for years in the rough sport known as American football. Although they wear protective helmets, people who have played a long time in the National Football League tend to have had one or more concussions, particularly if they played in positions where concussions occur more often (such as offensive lineman positions or defensive lineman positions). Described in the press release here, the study "included 53 former NFL players age 50 or older as well as 26 healthy controls and 83 individuals with mild cognitive impairment or dementia who did not play collegiate or professional contact sports and matched as closely as possible to the NFL retirees by age and education." The retired NFL players in the study "had an average of 5.63 concussions, 8.89 years in the NFL, and 115.12 games played." 

The press release for the study has a headline of "Head trauma doesn't predict memory problems in NFL retirees, UT Southwestern study shows." We read this:

    "Previous studies have reported mixed findings on the relationship between head-injury exposure and neuropsychological functioning later in life. While some investigations have suggested former NFL players may exhibit lower verbal memory and executive function scores, others have not found differences compared to control groups, according to a review of the literature ...The [UT Southwestern] researchers report that retired football players had slightly lower memory scores compared to healthy peer controls but did not find this to be significantly associated with head-injury exposure."

The scientific paper states that except for such slightly lower memory scores "no other group differences were observed, and head-injury exposure did not predict neurocognitive performance at baseline or over time." There was little difference between people who had an average of six concussions and those who had no concussions. 

The 2014 paper "No strong evidence for lateralisation of word reading and face recognition deficits following posterior brain injury" has some very good data comparing scores of people with strokes in the rear brain and controls. Table 3 shows no significant difference (denoted as NS) on 16 of the 19 tests.

A 1996 paper is entitled "Impaired Retrieval From Remote Memory in Patients With Frontal Lobe Damage." There were 7 patients, two of whom had about 50 milliliters of damage (about 5%). Their recognition scores on the Public Events test were only slightly less than normal, with testing covering recognition from 4 decades (Figure 2). The subjects with frontal lobe damage showed no damage to recognition of Famous Faces (Figure 3), but were a little below average on free recall and cued recall. Figure 4 shows that after an "adjustment" there was basically no difference between the controls and the subjects with frontal lobe damage.