Friday, January 28, 2022

List of Breakthrough Prize Winners in Life Sciences Hints at the Lack of Progress in Cognitive Neuroscience

The Breakthrough Prize in Life Sciences is a 3-million-dollar prize given for advances in biology.  The prize was founded in 2013 with donations from high-tech billionaires such as Mark Zuckerberg. Let's take a look at all the Breakthrough Prizes in Life Sciences that have been awarded since 2013, quoting from the wikipedia.org page that lists them:

  • "for the genetics of neural circuits and behavior, and synaptic guidepost molecules"
  • "for linkage mapping of Mendelian disease in humans using DNA polymorphisms"
  • "for the discovery of PI 3-Kinase and its role in cancer metabolism"
  • "for describing the role of Wnt signaling in tissue stem cells and cancer"
  • "for research on telomeres, illuminating how they protect chromosome ends and their role in genome instability in cancer"
  • "for discoveries in the mechanisms of angiogenesis that led to therapies for cancer and eye diseases"
  • "for the discovery of general principles for identifying human disease genes, and enabling their application to medicine through the creation and analysis of genetic, physical and sequence maps of the human genome"
  • "for cancer genes and targeted therapy"
  • "for characterization of human cancer genes"
  • "for induced pluripotent stem cells"
  • "for cancer genomics and tumor suppressor genes"
  • "for the discovery of T cell checkpoint blockade as effective cancer therapy"
  • "for defining the interlocking circuits in the brain that malfunction in Parkinson’s disease – this scientific foundation underlies the circuit-based treatment of Parkinson’s disease by deep brain stimulation"
  • "for the discovery of Target of Rapamycin (TOR) and its role in cell growth control"
  • "for discoveries leading to the development of controlled drug-release systems and new biomaterials"
  • "for the discovery of genes and biochemical mechanisms that cause hypertension"
  • "for discovering critical molecular determinants and biological functions of intracellular protein degradation"
  • "for the discovery and pioneering work on the development of high-frequency deep brain stimulation (DBS), which has revolutionized the treatment of Parkinson’s disease"
  • "for the discovery of covalent modifications of histone proteins and their critical roles in the regulation of gene expression and chromatin organization, advancing the understanding of diseases ranging from birth defects to cancer"
  • "for the discovery of a new world of genetic regulation by microRNAs, a class of tiny RNA molecules that inhibit translation or destabilize complementary mRNA targets"
  • "for harnessing an ancient mechanism of bacterial immunity into a powerful and general technology for editing genomes, with wide-ranging implications across biology and medicine"
  • "for the development and implementation of optogenetics – the programming of neurons to express light-activated ion channels and pumps, so that their electrical activity can be controlled by light"
  • "for discovering mutations in the amyloid precursor protein (APP) gene that cause early onset Alzheimer’s disease, linking accumulation of APP-derived beta-amyloid peptide to Alzheimer’s pathogenesis and inspiring new strategies for disease prevention"
  • "for the discovery of human genetic variants that alter the levels and distribution of cholesterol and other lipids, inspiring new approaches to the prevention of cardiovascular and liver disease"
  • "for pioneering the sequencing of ancient DNA and ancient genomes, thereby illuminating the origins of modern humans, our relationships to extinct relatives such as Neanderthals, and the evolution of human populations and traits"
  • "for elucidating how eukaryotic cells sense and respond to damage in their DNA and providing insights into the development and treatment of cancer"
  • "for discovering the centrality of RNA in forming the active centers of the ribosome, the fundamental machinery of protein synthesis in all cells, thereby connecting modern biology to the origin of life and also explaining how many natural antibiotics disrupt protein synthesis"
  • "for pioneering research on the Wnt pathway, one of the crucial intercellular signaling systems in development, cancer and stem cell biology"
  • "for elucidating autophagy, the recycling system that cells use to generate nutrients from their own inessential or damaged components"
  • "for discoveries of the genetic causes and biochemical mechanisms of spinocerebellar ataxia and Rett syndrome, findings that have provided insight into the pathogenesis of neurodegenerative and neurological diseases"
  • "for discovering how plants optimize their growth, development, and cellular structure to transform sunlight into chemical energy"
  • "for elucidating the unfolded protein response, a cellular quality-control system that detects disease-causing unfolded proteins and directs cells to take corrective measures"
  • "for elucidating the sophisticated mechanism that mediates the perilous separation of duplicated chromosomes during cell division and thereby prevents genetic diseases such as cancer"
  • "for elucidating the molecular pathogenesis of a type of inherited ALS, including the role of glia in neurodegeneration, and for establishing antisense oligonucleotide therapy in animal models of ALS and Huntington disease"
  • "for the development of an effective antisense oligonucleotide therapy for children with the neurodegenerative disease spinal muscular atrophy"
  • "for determining the consequences of aneuploidy, an abnormal chromosome number resulting from chromosome mis-segregation"
  • "for discovering hidden structures in cells by developing super-resolution imaging – a method that transcends the fundamental spatial resolution limit of light microscopy"
  • "for elucidating how DNA triggers immune and autoimmune responses from the interior of a cell through the discovery of the DNA-sensing enzyme cGAS"
  • "for the discovery of a new endocrine system through which adipose tissue signals the brain to regulate food intake"
  • "for discovering functions of molecular chaperones in mediating protein folding and preventing protein aggregation"
  • "for discovering molecules, cells, and mechanisms underlying pain sensation"
  • "for discovering TDP43 protein aggregates in frontotemporal dementia and amyotrophic lateral sclerosis, and revealing that different forms of alpha-synuclein, in different cell types, underlie Parkinson’s disease and Multiple System Atrophy"
  • "for developing technology that allowed the design of proteins never seen before in nature, including novel proteins that have the potential for therapeutic intervention in human diseases"
  • "for deconstructing the complex behavior of parenting to the level of cell-types and their wiring, and demonstrating that the neural circuits governing both male and female-specific parenting behaviors are present in both sexes"
  • "for discovering that fetal DNA is present in maternal blood and can be used for the prenatal testing of trisomy 21 and other genetic disorders"
  • "for elucidating a quality control pathway that clears damaged mitochondria and thereby protects against Parkinson’s Disease"
  • "for elucidating the molecular basis of neurodegenerative and cardiac transthyretin diseases, and for developing tafamidis, a drug that slows their progression"
  • "for engineering modified RNA technology which enabled rapid development of effective COVID-19 vaccines"
  • "for the development of a robust and affordable method to determine DNA sequences on a massive scale, which has transformed the practice of science and medicine"
In the list above there is a lack of any breakthroughs from the area of cognitive neuroscience, with the possible exception of the one line referring to parenting behaviors. We hear no mention of the words "memory" or "consciousness" or "cognition" or "learning" or "understanding" or "thinking." 

Let's look at the only line above referring to something from cognitive neuroscience: the line that makes a misleadingly broad reference to someone "demonstrating that the neural circuits governing both male and female-specific parenting behaviors are present in both sexes." No one has actually shown that neural circuits govern any type of behavior in any organism.  The line is referring to a 2021 award to Catherine Dulac.  If we look at the corresponding paper she co-authored ("Galanin neurons in the medial preoptic area govern parental behavior"), we find nothing very impressive. It is an experimental paper dealing with an extremely narrow topic: the behavior of mice when presented with never-before-seen baby mice (mouse pups).  The paper claims to have altered the behavior of mice presented with unfamiliar mouse pups by altering the brains of the mice.  

Unfortunately, the paper fails to be a robust demonstration, because it often uses study group sizes smaller than 15, some as small as only 8.  15 subjects per study group is the minimum needed for a robust experimental demonstration. The paper also fails to discuss how a serious blinding protocol was implemented, merely mentioning two cases in which an observer was blind to something, rather than describing in detail a thorough blinding protocol.  In the "Statistics" part of the paper the authors confess their failure to do a sample size calculation, a calculation done to ensure that adequate sample sizes are used. They state, "The sample sizes in our study were chosen based on common practice in animal behavior experiments."  That is the kind of thing people state when they have failed to calculate the sample sizes needed for a robust result.  It is well known that neuroscience experimenters these days habitually fail to use adequate sample sizes, with such a failure being more the rule than the exception, as discussed in the widely cited paper "Power failure: why small sample size undermines the reliability of neuroscience."  So when a paper says "the sample sizes in our study were chosen based on common practice in animal behavior experiments," we should treat that as a confession that a poor practice was followed. 
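The sample size calculation the authors skipped is a routine piece of arithmetic. Below is a minimal sketch (the function name and the chosen defaults are my own, not anything taken from the Dulac paper) using the standard normal-approximation formula for comparing two groups, n = 2((z<sub>α/2</sub> + z<sub>β</sub>)/d)², where d is the expected effect size measured in standard deviations:

```python
import math

# Normal-approximation sample size per group for a two-sample
# comparison at the conventional alpha = 0.05 (two-sided) and
# 80% power. The z values are fixed standard-normal quantiles:
# z_{1 - 0.05/2} = 1.9600 and z_{0.80} = 0.8416.
Z_ALPHA = 1.9600
Z_BETA = 0.8416

def sample_size_per_group(effect_size):
    """Subjects needed in each group to detect an effect of
    'effect_size' standard deviations (Cohen's d)."""
    n = 2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2
    return math.ceil(n)

print(sample_size_per_group(1.0))  # large effect: 16 per group
print(sample_size_per_group(0.5))  # medium effect: 63 per group
```

Even under the optimistic assumption of a very large effect (d = 1.0), this formula calls for 16 animals per group; for a more typical medium effect (d = 0.5) it calls for about 63, which is far more than the groups of 8 the paper used.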

So the only claimed "breakthrough" in the field of cognitive neuroscience turns out to be a "small potatoes" affair, something that did not follow experimental best practices, and does not qualify as a breakthrough at all.  

The list above is a kind of Exhibit A that I can cite to back up my claim that no progress has been made in supporting neuroscientist dogmas that brains store memories or that brains are the source of human minds.  In the past decade hundreds of millions of dollars have been doled out to our cognitive neuroscientists, but they have had no success in substantiating the claims they keep making about brains storing memories and brains producing minds. 

The list above also suggests two other things:
  • No major progress is being made by biologists in understanding the origin of life. The only reference to the origin of life in the list above is a superfluous and unwarranted claim that a discovery about "the centrality of RNA in forming the active centers of the ribosome" has accomplished some feat of "connecting modern biology to the origin of life," a vague and vacuous phrase that does not really mean much of anything. 
  • No major progress is being made by scientists in understanding morphogenesis, how the enormously organized state of a full human body is able to gradually arise from the million-times-simpler state of a speck-sized egg. The list above mentions no progress in the field of developmental biology. 


A lack of progress in cognitive neuroscience is suggested by the quote below from a recent neuroscience paper:

"Neuroscience is at the stage biology was at before Darwin. It has a myriad of detailed observations but no single theory explaining the connections between all of those observations. We do not even know if such a brain theory should be at the molecular level or at the level of brain regions, or at any scale between." 

Wednesday, January 19, 2022

Integrated Information Theory's Tangled Metaphysics Does Nothing to Explain Consciousness

A theory called "integrated information theory" purports to be a theory of consciousness. We should always be suspicious of any theory claiming to be a "theory of consciousness."  "Consciousness" is the most reductive term you could use to describe human minds and human mental experience.  A person trying to explain a human mind by advancing what he calls a "theory of consciousness" is rather like a person trying to explain planet Earth by advancing what he calls a "theory of roundness." Just as roundness is only one aspect of planet Earth, consciousness is only one aspect of the human mind and human mental experience. What we need is not a "theory of consciousness" but something very much harder to create: a theory of mentality that includes all of the main aspects of human mentality (including consciousness, comprehension, thinking, memory, imagination and creativity). 

When I go to a website devoted to selling integrated information theory (www.integratedinformationtheory.org), I get a home page whose first link leads to a paper behind a paywall. But the second link is to a paper that anyone can read. Let's take a close look at that paper, entitled "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0," and authored by Masafumi Oizumi, Larissa Albantakis, and Giulio Tononi. 

The abstract of the paper should leave us very discouraged about integrated information theory:

"This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as 'differences that make a difference' within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true 'zombies' – unconscious feed-forward systems that are functionally equivalent to conscious complexes."

We hear in this abstract no sign that any compelling reasoning will appear in the paper. To the contrary, we get two signals that the paper will be pushing nonsense. The first signal is the absurd insinuation that logic gates (low-level building blocks of a digital system) can somehow be configured to generate conscious experience.  The second signal is the claim that "simple systems can be minimally conscious."  There are minimally conscious organisms on our planet, but none of them are simple systems. When we consider all the complexity of its cells, each as complex as a factory, we should realize that even the simplest maybe-barely-conscious ant is not at all a simple system. 

After the paper asks a bunch of questions, in the section entitled "Models" we read this: "The main tenets of IIT can be presented as a set of phenomenological axioms, ontological postulates, and identities." That sounds like metaphysics, not anything like a scientific theory. 

After the paper defines an "axiom" as a self-evident truth, we read some "axioms" defined by the paper.  One of these "axioms" is listed as follows:

"COMPOSITION: Consciousness is compositional (structured): each experience consists of multiple aspects in various combinations. Within the same experience, one can see, for example, left and right, red and blue, a triangle and a square, a red triangle on the left, a blue square on the right, and so on."

It is not true that "each experience consists of multiple aspects in various combinations," although many experiences do consist of such a thing. A person can have a simple experience consisting of a single aspect. For example, you may lie on a beach looking up at a clear blue sky, while thinking of nothing. Such consciousness has only one aspect: your perception of the blueness above you.  Similarly, while waiting to fall asleep at night with your eyes closed, you may perceive nothing and be thinking of nothing.  Such an experience does not consist of "multiple aspects in various combinations."

We then read this "axiom":

"INFORMATION: Consciousness is informative: each experience differs in its particular way from other possible experiences. Thus, an experience of pure darkness is what it is by differing, in its particular way, from an immense number of other possible experiences. A small subset of these possible experiences includes, for example, all the frames of all possible movies." 

No, it is not correct that "consciousness is informative." Something is informative if it supplies information.  Consciousness by itself does not supply information. A conscious person may or may not be involved in supplying information.  

We then read this "axiom":

"INTEGRATION: Consciousness is integrated: each experience is (strongly) irreducible to non-interdependent components. Thus, experiencing the word 'SONO' written in the middle of a blank page is irreducible to an experience of the word 'SO' at the right border of a half-page, plus an experience of the word 'NO' on the left border of another half page – the experience is whole. Similarly, seeing a red triangle is irreducible to seeing a triangle but no red color, plus a red patch but no triangle."

The word "integrated" means "with various parts or aspects linked or coordinated."  The human mind may be thought of as being integrated (for example, consciousness is linked with memory and understanding). But mere consciousness is not intrinsically integrated. At some moment I may be aware of the blue sky ahead of me, but such awareness does not consist of multiple parts.  An experience does not have to consist of multiple parts.  As for the logic about "SONO" written on a blank page, of course that is "irreducible to an experience of the word 'SO' at the right border of a half-page, plus an experience of the word 'NO' on the left border of another half page," because that would give you "NOSO" not "SONO."  

So we have a very shaky foundation. We have three supposedly "self-evident axioms" that are not actually self-evident at all. Next we have a section called "Mechanisms" that suddenly starts dogmatizing about three characteristics that would be possessed by a "mechanism that can contribute to consciousness."  The result sounds like extremely dubious metaphysics.  No foundation has been laid establishing that there can be any such thing as a "mechanism that can contribute to consciousness."  

To the contrary, we can imagine no physical "mechanism that can contribute to consciousness."  Consciousness is an immaterial thing, and mechanisms are material things. We can get no plausible idea of how it can be that material things or material mechanisms could "contribute to consciousness."  If I have one neuron existing by itself, there is no reason why such a neuron should "contribute to consciousness." If I have 100 billion neurons that are all connected, there is no reason why such an arrangement should "contribute to consciousness."  If we think that connected neurons should somehow give rise to consciousness, that is only because we have been brainwashed into thinking such a thing by endless repetitions of such a groundless claim.  Similarly, if we had been endlessly told all our lives that consciousness was caused by electron collisions, then we might think that some glass jar with lots of colliding electrons would produce a conscious mind. 

We then have in the paper (under a title of "Systems of Mechanisms") three paragraphs making dogmatic claims such as the claim that "a set of elements can be conscious only if its mechanisms specify a conceptual structure that is irreducible to non-interdependent components (strong integration)." We are deeply mired now in arbitrary metaphysics, as we would be if we were reading a work of G.W. Hegel. Nothing has been done to show that "a set of elements can be conscious," so the writers have no business making such claims.  Organisms are not correctly described as "a set of elements." 

A little later, in Box 1 of the paper, we have a glossary defining more than thirty terms that will be used in the paper.  The glossary is very dense metaphysical gobbledygook.  An example is the term "cause-effect repertoire," which is given this gibberish definition: "The probability distribution of potential past and future states of a system as constrained by a mechanism in its current state." 

The paper then has a whole bunch of strange diagrams with many circles, circles within circles, arrows pointing from one circle to another, and so forth. None of this does anything to clarify how humans have consciousness. 

Below (in italics) are some of the dubious metaphysical claims we read in the paper:

  • "Recall that IIT's information postulate is based on the intuition that, for something to exist, it must make a difference. By extension, something exists all the more, the more of a difference it makes." No, it is not true that for something to exist, it must make a difference. Dust clouds in interstellar space exist, and rocks in the center of distant planets exist, without making any difference. And something does not exist "all the more" depending on the difference it makes. A person with no influence on the world exists just as much as some influential person. 
  • "The integration postulate further requires that, for a whole to exist, it must make a difference above and beyond its partition, i.e. it must be irreducible." No, a whole does not have to be irreducible. A whole consisting of three people can be reduced to three individuals, and a molecule consisting of five atoms can be broken up and reduced to its individual atoms. 
  • "Complexes cannot overlap and at each point in time, an element/mechanism can belong to one complex only." No, complexes can overlap; for example, the brain complex overlaps with the circulatory system in the body. And an element can belong to more than one complex. A blood vessel in the brain belongs to both the brain system and the circulatory system.  
  • "The exclusion postulate at the level of systems of mechanisms says that only a conceptual structure that is maximally irreducible can give rise to consciousness – other constellations generated by overlapping elements are excluded."  Since humans have no understanding at all of how any structure can give rise to consciousness, it is unwarranted to be making some claim with the form "only X can give rise to consciousness."  Describing such a claim as a postulate (an assumption) indicates its weakness. 
  •  "The exclusion postulate requires, first, that only one cause exists. This requirement represents a causal version of Occam's razor, saying in essence that 'causes should not be multiplied beyond necessity', i.e. that causal superposition is not allowed." Occam's razor is not the principle that something cannot have multiple causes. It is the principle that in general we should  prefer a simpler explanation that requires postulating fewer things in order to explain something.  Many things do have multiple causes, and it is dead wrong to claim that causal superposition (assuming multiple causes of a single effect) is not allowed. Very many things do have multiple causes. 
  • "Simple systems can be conscious: a minimally conscious photodiode."  This is followed by text claiming that a tiny unit called a photodiode is minimally conscious.  Since a modern digital camera contains very many such photodiodes (one for each pixel captured), integrated information theory would seem to predict that every digital camera is substantially conscious -- an idea that is extremely nonsensical.  

Later in the article we have an inaccurate appeal to one of the phoniest myths of neuroscientists: the claim that split brain patients have two different minds. We read this:

"Under special circumstances, such as after split brain surgery, the main complex may split into two main complexes, both having high ΦMax. There is solid evidence that in such cases consciousness itself splits in two individual consciousnesses that are unaware of each other."  

No such evidence exists. A similar bogus claim is made in another article on integrated information theory appearing on the www.integratedinformation.org site (one authored by Giulio Tononi, one of the three authors mentioned above): "It is well established that, after the complete section of the corpus callosum—the roughly 200 million fibers that connect the cortices of the two hemispheres—consciousness is split in two: there are two separate 'flows' of experience, one associated with the left hemisphere and one with the right hemispheres." That claim is untrue. To the contrary, in 2014 the wikipedia.org article on split-brain patients stated the following:

"In general, split-brained patients behave in a coordinated, purposeful and consistent manner, despite the independent, parallel, usually different and occasionally conflicting processing of the same information from the environment by the two disconnected hemispheres...Often, split-brained patients are indistinguishable from normal adults."

In the video here we see a split-brain patient who seems like a pretty normal person, not at all someone with “two minds." And at the beginning of the video here the same patient says that after such a split-brain operation “you don't notice it” and that you don't feel any different than you did before – hardly what someone would say if the operation had produced “two minds” in someone. And the video here about a person with a split brain from birth shows us what is clearly someone with one mind, not two. 

A  scientific study published in 2017 set the record straight on split-brain patients. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.”

The press release states the following: “According to Pinto, the results present clear evidence for unity of consciousness in split-brain patients.” The paper states, “These findings suggest that severing the cortical connections between hemispheres splits visual perception, but does not create two independent conscious perceivers within one brain.”  The recent article here in Psychology Today describes the bizarre experiment that was used to make the groundless claim that split-brain patients have two minds. It was some experiment based only on visual perception, using some strange experimental setup unlike anyone normally encounters. The article shreds to pieces claims that results from such an experiment show that split-brain patients have two minds:

"Not so fast. There are several reasons to question the conclusions Sperry, Gazzaniga, and others sought to draw. First, both split-brain patients and people closest to them report that no major changes in the person have occurred after the surgery. When you communicate with the patient, you never get the sense that there are now different people living in the patient's head.

This would be very puzzling if the mind was really split. Currently, you are the only conscious person in your neocortex. You consciously perceive your entire visual field, and you control your whole body. However, if your mind splits, this would dramatically change. You would become two people: 'lefty' and 'righty.' 'Lefty' would only see what is in the right visual field and control the right side of the body while 'righty' would see what’s in the left visual field and control the left side of the body. Both 'lefty' and 'righty' would be half-blind and half-paralyzed. It would seem to each of them that another person is in charge of half of the body.

Yet, patients never indicate that it feels as though someone else is controlling half of the body. The patients’ loved ones don’t report noticing a dramatic change in the person after the surgery either. Could we all — patients themselves, their family members, and neutral observers — miss the signs that a single person has been replaced by two people? If you suddenly lost control of half of your body, could you fail to notice? Could you fail to notice if the two halves of your spouse’s or child’s body are controlled by two different minds?"

A 2020 paper states this about split-brain patients: "Apart from a number of anecdotal incidents in the subacute phase following the surgery, these patients seem to behave in a socially ordinary manner and they report feeling unchanged after the operation (Bogen, Fisher, & Vogel, 1965; Pinto et al., 2017a; R. W. Sperry, 1968; R. Sperry, 1984)."  Misleading statements by neuroscientists are extremely common, and claims by some of them that normal-speaking and normal-acting split-brain patients have two minds (based merely on differing results produced in very weird artificial experimental setups unlike real-world situations) are one of the most egregious examples of inaccurate speech by neuroscientists. 

What we have in the "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0" paper is mainly metaphysics following the opaque oracular style of Hegel and Heidegger, often careless and poorly reasoned metaphysics. The claim often made that integrated information theory is a "scientific theory of consciousness" is untrue. Integrated information theory is a very errant metaphysical theory that includes a few appeals to scientific observations, to give a little scientific flavor to its gobbledygook.  The most important reference the theory makes to an alleged scientific observation is a bogus claim that splitting a brain by severing the corpus callosum produces two minds, something that has never actually been observed, with the actual observations telling us that no such thing occurs. 

Besides inaccurately predicting that split-brain patients should have two minds, integrated information theory inaccurately predicts that "widespread lesions" of the cortex should cause unconsciousness. In the scholarpedia.org article on the theory, Giulio Tononi states this: 

"IIT provides a principled and parsimonious way to account for why certain brain regions appear to be essential for our consciousness while others do not. For example, widespread lesions of the cerebral cortex lead to loss of consciousness, and local lesions or stimulations of various cortical areas and tracts can affect its content (for example, the experience of color)."

To the contrary, it is a fact that many epileptic patients with severe seizures underwent hemispherectomy operations in which half of the brain (including half of the cortex) was removed, without any major effect on either consciousness or intelligence.  Many of John Lorber's patients with good intelligence and normal consciousness had lost most of their cortex.  A French person who held a job as a civil servant was found to have "little more than a thin sheet of actual brain tissue." In the paper here we read on page 1 of a case reported by Martel in 1823 of a boy who after age five lost all of his senses except hearing, and became bed-confined. Until death he "seemed mentally unimpaired." But after he died, an autopsy found that apart from "residues of meninges" there was "no trace of a brain" inside the skull. This was good consciousness with little or no cortex. In the same paper we read of a person who had a normal life despite having "very little cortex" because of hydrocephalus, a condition in which brain tissue is replaced by fluid:

"A man was examined because of his headache, and to his physicians' surprise, he had an 'incredibly large' hydrocephalus. Villinger, the director of the Cognitive Neurology Department, stated that this man had 'almost no brain,' only 'a very thin layer of cortical tissue.' This man led an unremarkable life, and his hydrocephalus was only discovered by chance (Hasler, 2016, p. 18)"

Wednesday, January 12, 2022

No, a USC Team Did Not Show "How Memories Are Stored in the Brain"

The EurekAlert site at www.eurekalert.org is yet another "science news" site that seems to simply pass on press releases coming from university press offices.  Nowadays university press offices are not a very reliable source of information, as they tend to display a "local bias" in which the work of researchers at the university gets adulatory treatment it does not deserve. University press offices often make grandiose, fawning, or hype-filled claims about research done by professors at their university, claims that are often unwarranted.  Such press releases often make unimportant or dubious research sound as if it were some type of important breakthrough. 

The EurekAlert site says that it is "a service of the American Association for the Advancement of Science." That makes it sound like we would be getting some kind of "official science news" or at least news of better-than-average reliability. But very strangely at the bottom of each news story on the site, we read this notice: "Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system."  That basically means that we should not trust any headlines we read merely because they appear on the EurekAlert site.  At the post here I discuss various untrue headlines that appeared on the EurekAlert site.

The latest untrue headline to appear at the EurekAlert site is a headline from two days ago, one which stated "USC team shows how memories are stored in the brain, with potential impact on conditions like PTSD." Nothing of the sort occurred.  All that happened was that some scientists tracked some new synapses being created and an equal number of synapses being lost after some tiny zebrafish learned something. 

We read text in the story that contradicts the story's headline:

"They made the groundbreaking discovery that learning causes synapses, the connections between neurons, to proliferate in some areas and disappear in others rather than merely changing their strength, as commonly thought. These changes in synapses may help explain how memories are formed and why certain kinds of memories are stronger than others."

Notice the contradiction. The headline claimed that the team had shown how memories are stored in the brain. But the text of the story merely makes the much weaker claim that the type of thing observed "may help explain how memories are formed." 

The quotation above is not even an accurate description of what was observed.  The scientists did not find that synapses "proliferate in some areas and disappear in others."  Instead, what was observed in each area of the zebrafish brain studied was a roughly equal number of synapse gains and synapse losses. Below is one of the visuals from the paper (from the page here and the site here). It shows synapse losses and gains in only one tiny part of the zebrafish's brain during a small time period. Notice that the blue dots (representing synapse losses) are roughly as common as the yellow dots (representing synapse gains). 

Data results such as this are best interpreted under the hypothesis that we are merely seeing random losses and gains of synapses that continually occur, and that the result has nothing to do with anything being learned.  It has long been known that synapses are short-lived things.  The paper here states, "Experiments indicate in absence of activity average life times ranging from minutes for immature synapses to two months for mature ones with large weights."  Synapses randomly appear and disappear, just as pimples randomly appear and disappear on the face of a teenager with a bad case of acne. 

Zebrafish have only about 100,000 neurons, and there are perhaps 1000 synapses for every neuron. That makes very roughly about 100 million synapses in the zebrafish brain. Given synapses that have average lifetimes of no greater than a few months, we would expect that every hour about 100,000 synapses in the zebrafish brain would randomly be lost or would randomly appear.  The synapse loss and gain shown in the USC data is about what we would expect under such numbers. The visual shows hundreds of synapse losses and gains, but this visual only maps such losses and gains in a tiny portion of the zebrafish brain. 
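The back-of-the-envelope arithmetic above can be sketched in a few lines. All of the inputs are rough ballpark assumptions from the text (neuron count, synapses per neuron, and an assumed two-month average synapse lifetime), not measured values:

```python
# Ballpark estimate of random synapse turnover in a zebrafish brain.
# All inputs are rough assumptions from the text, not measured values.

neurons = 100_000            # approximate zebrafish neuron count
synapses_per_neuron = 1_000  # rough average assumed in the text
total_synapses = neurons * synapses_per_neuron  # ~100 million

avg_lifetime_hours = 2 * 30 * 24  # assume a ~2-month average synapse lifetime

# At steady state, the hourly loss rate is roughly total / lifetime,
# and gains occur at about the same rate (keeping the total stable).
losses_per_hour = total_synapses / avg_lifetime_hours
print(f"{total_synapses:,} synapses; ~{losses_per_hour:,.0f} lost (and gained) per hour")
```

With the two-month assumption this comes to roughly 70,000 synapses lost and gained per hour; with shorter assumed lifetimes (the quoted paper mentions lifetimes down to minutes for immature synapses), hourly turnover would be even higher, in line with the ~100,000-per-hour figure above.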

The type of learning tested on the zebrafish was something called tail-flick conditioning or TFC. At the link here we are told this:

"The total numbers of synapses before TFC are not significantly different among the different groups: superlative learner (L, N=11 fish), partial learner (PL, N = 6), nonlearner (NL, N=11), US only (N=11), NS (N=11), and CS only (N=11) (p > 0.3, Kruskal Wallis).
B The total numbers of synapses after TFC are not significantly different among the different groups (p > 0.3, Kruskal Wallis)."

So there was no increase in synapses for the zebrafish who learned something (the L and PL groups) compared to the zebrafish who did not learn anything (the NL group).  The study has not produced any evidence that learning or memory formation produces an increase in synapses.  

The study also failed to support the widely-made claim that synapses strengthen during memory formation or learning. In the EurekAlert story we read this:

" 'For the last 40 years the common wisdom was that you learn by changing the strength of the synapses,' said Kesselman, who also serves as director of the Informatics Division at the USC Information Sciences Institute and is a professor with the Daniel J. Epstein Department of Industrial and Systems Engineering, 'but that’s not what we found in this case.' ” 

Oops, it sounds like neuroscientists have been telling us baloney for the past 40 years by trying to claim that memories are formed by synapse strengthening (an idea that never made any sense, because information is never stored by a mere strengthening of something). The USC scientists have not presented anything that can serve as a credible substitute narrative.  "Synapses being lost at the same rate as synapses being gained" makes no sense as a narrative of how memories could be stored, just as "words being written at the same rate as words being erased" makes no sense as a description of how someone could write a book using pencil and paper. 

By visually diagramming the high turnover rate of synapses, and by reminding us of the short lifetimes and rapid turnover of synapses, what the USC study really has done is to highlight a major reason for rejecting all claims that human memories are stored in synapses.  Synapses only last for days, weeks or months, not years; and the proteins that make up synapses have average lifetimes of only a few weeks or less. But human memories often last for 50 years or more.  It makes no sense to believe that human memories that can last for 50 years are stored in synapses which last a few months at best, and which internally are subject to constant remodeling and restructuring because of the short lifetimes of synapse proteins. 

In an article he wrote at the site The Conversation, USC scientist Dan Arnold describes his own results in a give-you-the-wrong-idea way, stating the following: "When we compared the 3D synapse maps before and after memory formation, we found that neurons in one brain region, the anterolateral dorsal pallium, developed new synapses while neurons predominantly in a second region, the anteromedial dorsal pallium, lost synapses."  The results (as shown here) were actually that both regions lost and gained synapses at roughly equal rates. Arnold confesses, "It’s still unknown whether synapse generation and loss actually drive memory formation." 

So why is it that the press release for this work contained the untrue headline "USC team shows how memories are stored in the brain, with potential impact on conditions like PTSD"? Why do scientists so very often allow untrue press releases about their work to be issued by the press offices of their universities, press releases making claims that are not supported by their work, and that often contradict the statements of those very scientists? Is it because scientists are willing to condone lying hype about their work, for the sake of getting more of the paper citations that scientists so much long for (a scientist's citation count being a number as important to him as a baseball player's batting average)? 

A particularly pathetic aspect of the phony press release headline is its claim that this counting of synapse losses and gains in tiny zebrafish has a "potential impact on conditions like PTSD."  Such research has no relevance to humans with post-traumatic stress disorder, and the claim that it does is as phony as the claim that the study "shows how memories are stored." 

Wednesday, January 5, 2022

Suspect Shenanigans When You Hear Claims of "Mind Reading" Technology

 The New Yorker recently published an extremely misleading article with a title of "The Science of Mind Reading," and with a subtitle of "Researchers are pursuing age-old questions about the nature of thoughts—and learning how to read them." The article (not written by a neuroscience scholar) provides no actual evidence that anyone is making progress trying to read thoughts from a brain. 

The article starts out with a dramatic-sounding but extremely dubious narrative. We hear of experts trying to achieve communication with a Patient 23 who was assumed to be in a "vegetative state" after a bad injury five years earlier.  We read about the experts asking questions while scanning the patient's brain.  They were looking for some brain signals that could be interpreted as a "yes" answer or a "no" answer.  We are told: "They would pose a question and tell him that he could signal 'yes' by imagining playing tennis, or 'no' by thinking about walking around his house." 

We get this narrative (I will put unwarranted and probably untrue statements in boldface):

"Then he asked the first question: 'Is your father’s name Alexander?'

The man’s premotor cortex lit up. He was thinking about tennis—yes.

'Is your father’s name Thomas?'

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

'Do you have any brothers?'

Tennis—yes.

'Do you have any sisters?'

House—no."

Constantly foisted upon us by scientists and science writers, the claim that particular regions of the brain "light up" under brain scanning is untrue. Such claims are visually reinforced by extremely deceptive visuals in which tiny differences of less than 1 percent are shown in bright red, causing people to think very slight differences are major differences. The truth is that all brain regions are active all the time. When a brain is scanned, only tiny signal differences show up.  Typically the differences will be no greater than about half of one percent, smaller than 1 part in 200.  When scanning a brain, you can always see dozens of little areas that have very slightly greater activity, and there is no reason to think that such variations are anything more than very slight chance variations. Similarly, if you were to analyze the blood flow in someone's foot, you would find random small variations in blood flow between different regions, with differences of about 1 part in 200. 

Because of such random variations, there would never be any warrant for claiming that a person was thinking about a particular thing based on small fluctuations in brain activity. At any moment there might for random reasons be 100 different little areas in the brain that had 1 part in 200 greater activity, and 100 other different little areas in the brain that might have 1 part in 200 less activity.  In this case no evidence has been provided of any ability to read thoughts of a person supposed to be in a vegetative state. We cannot reliably distinguish any signal from the noise. 
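A simple simulation illustrates the point about chance variation. The region count and noise level here are assumptions chosen to match the rough 1-part-in-200 figure discussed above, not real fMRI data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 1,000 small brain regions whose measured activity fluctuates
# randomly by about 0.5% (1 part in 200) around a common baseline.
# This noise level is an assumption standing in for fMRI measurement noise.
baseline = 1.0
fluctuation = 1 / 200
activity = baseline + rng.normal(scale=fluctuation, size=1000)

# Count regions that "light up" (exceed baseline by at least 0.5%)
# even though no real signal whatsoever was put into the data.
lit_up = np.sum(activity >= baseline * (1 + fluctuation))
print(lit_up, "regions exceed the threshold with no signal at all")
```

Even with pure noise, well over a hundred of the thousand simulated regions cross the "activation" threshold, which is why small fluctuations alone cannot justify a claim of thought detection.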

The New Yorker article describing the case above refers us to a Los Angeles Times article entitled "Brains of Vegetative Patients Show Signs of Life." The article gives us no good evidence that thoughts were read from this patient 23. The article merely mentions that 54 patients in a vegetative state had their brains scanned, and that one of them (patient 23) seemed "several times" to answer "yes" or "no" correctly, based on examining fluctuations of brain activity.  Given random variations in brain activity, you would expect to get such a result by chance if you scanned 54 patients who were completely unconscious. So no evidence of either consciousness or thought reading has been provided.  

A look at the corresponding scientific paper  shows that the fluctuations in brain activity were no more than about a half of one percent. No paper like this should be taken seriously unless the authors followed a rigorous blinding protocol, but the paper makes no mention of any blinding protocol being followed.  Under a blinding protocol, anyone looking for signs of a "yes" or "no" answer would not know whether a "yes" answer was the correct answer.  The paper provides no actual evidence either of thought reading by brain scanning or even of detection of consciousness. We merely have tiny 1-part-in-200 signal variations of a type we would expect to get by chance from scanning one or more of 54 patients who are all unconscious.  

The paper tells us that six questions were asked, and the authors seemed impressed that one of the 54 patients seemed to them to answer all six questions correctly (by means of brain fluctuations that the authors subjectively interpreted).  The probability of getting six correct answers to yes-or-no questions by a chance method such as coin-flipping is 1 in two-to-the-sixth-power, or 1 in 64.  So it is not very unlikely at all that you would get one such result testing 54 patients, purely by chance, even if all of the patients were unconscious and none of them understood the instructions they were given.  
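The arithmetic here is easy to check. Using only the question count and patient count given above, a short calculation shows that seeing one "six for six" patient among 54 is more likely than not under pure chance:

```python
# Chance that one patient "answers" six yes/no questions correctly at random:
p_all_six = 0.5 ** 6  # 1 in 64

# Chance that at least one of 54 independent patients does so purely by chance:
p_at_least_one = 1 - (1 - p_all_six) ** 54

print(f"p(all six correct) = {p_all_six:.4f} (1 in {1 / p_all_six:.0f})")
print(f"p(at least 1 of 54) = {p_at_least_one:.2f}")
```

The second probability works out to about 0.57, so the reported result is roughly what a coin-flipping model of 54 unconscious patients would predict.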

The New Yorker article then introduces Princeton scientist Ken Norman, incorrectly describing him as "an expert on thought decoding." Because no progress has been made on decoding thoughts from studying brains, no one should be described as an expert on such a thing. The article then gives us a very misleading passage trying to suggest that scientists are making some progress in understanding how a brain could produce or represent thoughts:

"Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense 'meaning space.' They could see how these points were interrelated and encoded by neurons." 

To the contrary, no neuroscientist has the slightest idea of how thoughts could be encoded by neurons, nor have neuroscientists discovered any evidence that any neurons encode thoughts. It is nonsensical to claim that thoughts can be compared to points in three-dimensional space. Points in three-dimensional space are simple 3-number coordinates, but thoughts can be vastly more complicated. If I have the thought that I would love to be lounging on a beach during sunset while sipping lemonade, there is no way to express that thought as three-dimensional coordinates. 

We then read about some experiment:

"Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest. 'We want to get the brain patterns that are associated with different subclasses of scenes,' Norman said." 

But then the article goes into a long historical digression, and we never learn of what the result is from this experiment. Norman is often mentioned, but we hear no mention of any convincing work he has done on this topic. Inaccurately described as "thought decoding," the attempt described above is merely an attempt to pick up signs in the brain of visual perception. Seeing something is not thinking about it. Most of the alleged examples of high-tech "mind reading" are merely claimed examples of picking up traces of vision by looking at brains -- examples that are not properly called "mind reading" (a term that implies reading someone's thoughts).

We hear a long discussion often mentioning Ken Norman, but failing to present any good evidence of high-tech mind reading. We read this claim about brain imaging: "The scripts and the scenes were real—it was possible to detect them with a machine." But the writer presents no evidence to back up such a claim. 

Norman is a champion of a very dubious analytical technique called multi-voxel pattern analysis (MVPA), and seems to think such a technique may help read thoughts from the brain. A paper points out problems with such a technique:

"MVPA does not provide a reliable guide to what information is being used by the brain during cognitive tasks, nor where that information is. This is due in part to inherent run to run variability in the decision space generated by the classifier, but there are also several other issues, discussed here, that make inference from the characteristics of the learned models to relevant brain activity deeply problematic." 

In a paper, Norman claims "This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading."  Looking up two of the papers cited in support of this claim, I see that only four subjects were used in each study.  Looking up another of the studies cited in support of this claim, I find that only five subjects were used for the experiment cited. This means none of these studies provided robust evidence (15 subjects per study group being the minimum for a moderately reliable result). This is what goes on massively in neuroscience papers: authors claiming that other papers showed something that those papers did not actually show, because of poor methodology (usually including way-too-small sample sizes) in the cited studies.   

The New Yorker article then discusses a neuroscientist named Jack Gallant, stating the following: "Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors."  Gallant has produced a Youtube.com clip entitled "Movie Reconstruction from Human Brain Activity."

On the left side of the video we see some visual images. On the right side of the video we see some blurry images entitled "Clip reconstructed from brain activity."  We are left with the impression that scientists have somehow been able to get "movies in the mind" by scanning brains. 

However, such an impression is very misleading, and what is going on smells like smoke and mirrors shenanigans.  The text below the video explains the funky technique used.  The videos entitled "clip reconstructed from brain activity" were produced through some extremely elaborate algorithm that mainly used inputs other than brain activity. Here is the description of the technique used:

"[1] Record brain activity while the subject watches several hours of movie trailers. [2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured....[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions. [4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction."

This bizarre and very complicated rigmarole is some very elaborate scheme in which brain activity is only one of the inputs, and the main inputs are lots of footage from Youtube videos.  It is very misleading to identify the videos as "clip reconstructed from brain activity," as the clips are mainly constructed from data other than brain activity. No actual evidence has been produced that someone detected anything like "movies in the brain." It seems like merely smoke and mirrors under which some output from a variety of sources (produced by a ridiculously complicated process) is being passed off as something like "movies in the brain." 
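As a toy illustration of why the output is "mainly constructed from data other than brain activity," here is a minimal sketch of the select-and-average scheme described in the quoted steps. Every array is a made-up synthetic stand-in (the real pipeline used fMRI voxel recordings, per-voxel regression models, and roughly 18,000,000 YouTube clips), and the variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real pipeline's inputs.
n_voxels, n_library_clips, clip_features = 50, 10_000, 20

# Step 2: a learned linear "dictionary" mapping clip features to
# predicted voxel activity (here just random weights for illustration).
dictionary = rng.normal(size=(n_voxels, clip_features))

# Step 4: a random library of clips (feature vectors standing in for video).
library = rng.normal(size=(n_library_clips, clip_features))
predicted_activity = library @ dictionary.T  # predicted brain response per clip

# Step 3: observed brain activity for a held-out clip the subject watched.
observed = rng.normal(size=n_voxels)

# Select the 100 library clips whose predicted activity best matches
# the observation...
similarity = predicted_activity @ observed
top100 = library[np.argsort(similarity)[-100:]]

# ...and average them. Note that every value in the output comes from
# the library; the brain data is used only to rank the clips.
reconstruction = top100.mean(axis=0)
print(reconstruction.shape)  # one averaged "clip"
```

The sketch makes the structural point plain: the brain scan enters only at the ranking step, while all of the pixels in the final "reconstruction" come from the downloaded video library.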

Similar types of extremely dubious convoluted methods seem to be going on in the papers here co-authored by Gallant.

In both of these papers, we have a kind of byzantine methodology in which bizarre visual montages or artificial video clips are constructed. For example, the second paper resorts to "an averaged high posterior (AHP) reconstruction by averaging the 100 clips in the sampled natural movie prior that had the highest posterior probability." The claim made by the New Yorker -- that Gallant has "used thought decoding to reconstruct video montages from brain scans" is incorrect. Instead, Gallant is constructing visual montages using some extremely elaborate and hard-to-justify methodology (the opposite of straightforward), and brain scans are merely one of many inputs from which such montages are constructed.  This is no evidence of technology reading thoughts or imagery from brains.  In both of the papers above, only three subjects were used. 15 subjects per study group is the minimum for a moderately compelling experimental result. And since neither paper uses a blinding protocol, the papers fail to provide robust evidence of anything. 

The rest of the New Yorker article is mainly something along the lines of "well, if we've made this much progress, what wonderful things may be on the horizon?" But no robust evidence has been provided that any progress has been made in reading thoughts or mental imagery from brains. The author has spent quite a while interviewing and walking around with scientist Ken Norman, and has accepted "hook, line and sinker" all the claims Norman has made, without asking any tough questions, and without critically analyzing the lack of evidence behind his more doubtful claims and the dubious character of the methodologies involved. The article is written by a freelance writer who has written on a very wide variety of topics, and who shows no signs of being a scholar of neuroscience or the brain or philosophy of mind issues.  

There are no strong neural correlates of either thinking or recall. As discussed here, brain scan studies looking for neural correlates of thinking or recall find only very small differences in brain activity, typically smaller than 1 part in 200. Such differences are what we would expect to see from chance variations, even if a brain does not produce thinking and does not produce recall.  The chart below illustrates the point. 

[Chart: neural correlates of thinking]

What typically goes on in some study claiming to find some neural correlate of thinking or recall is professor pareidolia. Pareidolia is when someone hoping to find some pattern reports a pattern that isn't really there, like someone eagerly scanning his toast each day for years until he finally reports finding something that looks to him like the face of Jesus. A professor examining brain scans and eagerly hoping to find some neural signature or correlate of thinking or recall may be as prone to pareidolia as some person scanning the clouds each day eagerly hoping to find some shape that looks like an angel. 

There are ways for scientists to help minimize the chance that they are reporting patterns because of pareidolia. One way is the application of a rigorous blinding protocol throughout an experiment. Another way is to use adequate sample sizes such as 15 or 30 subjects per study group. Most neuroscience experiments fail to follow such standards. The shockingly bad tendencies of many experimental biologists were recently revealed by a replication project that found a pitifully low replication rate and other severe problems in a group of biology experiments chosen to be replicated.

Postscript: The latest example of needless risk to subjects for the sake of unfounded "mind reading by brain scanning" claims is a study with a preprint entitled "Semantic reconstruction of continuous language from non-invasive brain recordings." The study failed to show good evidence for anything important, as it used way-too-small study group sizes of only three subjects and seven subjects (15 subjects per study group is the minimum for a moderately impressive result). Following Questionable Research Practices, the scientists report no sample size calculation, no blinding protocol, no pre-registration, no control group, and no effect size. The only "statistical significance" reported smells like "p-hacking": results at the bare minimum for publication (merely p < .05). For these basically worthless results, seven subjects endured something like 16 hours of brain scanning with a 3T scanner, more than 30 times longer than they would have had for a diagnostic MRI.  Senselessly, this study has been reported by our ever-credulous science press as some case of reading thoughts by brain scanning. It is no evidence of any such thing.