Wednesday, January 19, 2022

Integrated Information Theory's Tangled Metaphysics Does Nothing to Explain Consciousness

A theory called "integrated information theory" purports to be a theory of consciousness. We should always be suspicious of any theory claiming to be a "theory of consciousness." "Consciousness" is the most reductive term you could use to describe human minds and human mental experience. A person trying to explain a human mind by advancing what he calls a "theory of consciousness" is rather like a person trying to explain planet Earth by advancing what he calls a "theory of roundness." Just as roundness is only one aspect of planet Earth, consciousness is only one aspect of the human mind and human mental experience. What we need is not a "theory of consciousness" but something very much harder to create: a theory of mentality that includes all of the main aspects of human mentality (including consciousness, comprehension, thinking, memory, imagination and creativity). 

When I go to the website devoted to selling integrated information theory (www.integratedinformationtheory.org), I get a home page whose first link leads to a paper behind a paywall. But the second link is to a paper that anyone can read. Let's take a close look at that paper, entitled "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0," and authored by Masafumi Oizumi, Larissa Albantakis, and Giulio Tononi. 

The abstract of the paper should leave us very discouraged about integrated information theory:

"This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as 'differences that make a difference' within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true 'zombies' – unconscious feed-forward systems that are functionally equivalent to conscious complexes."

We hear in this abstract no sign that any compelling reasoning will appear in the paper. To the contrary, we get two signals that the paper will be pushing nonsense. The first signal is the absurd insinuation that logic gates (a low-level building block of a digital system) can somehow be configured to generate conscious experience. The second signal is the claim that "simple systems can be minimally conscious." There are minimally conscious organisms on our planet, but none of them are simple systems. Once we consider the complexity of its cells, each as complex as a factory, we should realize that even the simplest maybe-barely-conscious ant is not at all a simple system. 
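For concreteness, the "systems of mechanisms" that IIT papers analyze are tiny deterministic networks of logic gates. The sketch below is my own toy example, not taken from the paper: three binary nodes, each updating by a fixed Boolean rule from the others' current states. The system's entire behavior is captured by a short state-transition table, and nothing in that table bears any resemblance to experience.

```python
from itertools import product

# A toy three-node logic-gate network of the general kind IIT analyzes.
# Each node's next state is a fixed Boolean function of the current states.
def step(state):
    a, b, c = state
    return (b | c,   # node A: OR of B and C
            a & c,   # node B: AND of A and C
            a ^ b)   # node C: XOR of A and B

# The system's complete "dynamics" is just this transition table.
for s in product([0, 1], repeat=3):
    print(s, "->", step(s))
```

However such a network is wired, it remains a mechanical lookup from one bit pattern to the next; the reader can judge whether configuring such gates could plausibly "generate experience."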

After the paper asks a bunch of questions, in the section entitled "Models" we read this: "The main tenets of IIT can be presented as a set of phenomenological axioms, ontological postulates, and identities." That sounds like metaphysics, not anything like a scientific theory. 

After the paper defines an "axiom" as a self-evident truth, we read some "axioms" defined by the paper. One of these "axioms" is listed as follows:

"COMPOSITION: Consciousness is compositional (structured): each experience consists of multiple aspects in various combinations. Within the same experience, one can see, for example, left and right, red and blue, a triangle and a square, a red triangle on the left, a blue square on the right, and so on."

It is not true that "each experience consists of multiple aspects in various combinations," although many experiences do. A person can have a simple experience consisting of a single aspect. For example, you may lie on a beach looking up at a clear blue sky, while thinking of nothing. Such consciousness has only one aspect: your perception of the blueness above you. Similarly, while waiting to fall asleep at night with your eyes closed, you may perceive nothing and be thinking of nothing. Such an experience does not consist of "multiple aspects in various combinations."

We then read this "axiom":

"INFORMATION: Consciousness is informative: each experience differs in its particular way from other possible experiences. Thus, an experience of pure darkness is what it is by differing, in its particular way, from an immense number of other possible experiences. A small subset of these possible experiences includes, for example, all the frames of all possible movies." 

No, it is not correct that "consciousness is informative." Something is informative if it supplies information. Consciousness by itself does not supply information. A conscious person may or may not be involved in supplying information. 

We then read this "axiom":

"INTEGRATION: Consciousness is integrated: each experience is (strongly) irreducible to non-interdependent components. Thus, experiencing the word 'SONO' written in the middle of a blank page is irreducible to an experience of the word 'SO' at the right border of a half-page, plus an experience of the word 'NO' on the left border of another half page – the experience is whole. Similarly, seeing a red triangle is irreducible to seeing a triangle but no red color, plus a red patch but no triangle."

The word "integrated" means "with various parts or aspects linked or coordinated." The human mind may be thought of as being integrated (for example, consciousness is linked with memory and understanding). But mere consciousness is not intrinsically integrated. At some moment I may be aware only of the blue sky above me; such awareness does not consist of multiple parts, and an experience does not have to consist of multiple parts. As for the logic about "SONO" written on a blank page, of course that is "irreducible to an experience of the word 'SO' at the right border of a half-page, plus an experience of the word 'NO' on the left border of another half page," because that would give you "NOSO," not "SONO." 

So we have a very shaky foundation. We have three supposedly "self-evident axioms" that are not actually self-evident at all. Next we have a section called "Mechanisms" that suddenly starts dogmatizing about three characteristics that would be possessed by a "mechanism that can contribute to consciousness." The result sounds like extremely dubious metaphysics. No foundation has been laid establishing that there can be any such thing as a "mechanism that can contribute to consciousness." 

To the contrary, we can imagine no physical "mechanism that can contribute to consciousness."  Consciousness is an immaterial thing, and mechanisms are material things. We can get no plausible idea of how it can be that material things or material mechanisms could "contribute to consciousness."  If I have one neuron existing by itself, there is no reason why such a neuron should "contribute to consciousness." If I have 100 billion neurons that are all connected, there is no reason why such an arrangement should "contribute to consciousness."  If we think that connected neurons should somehow give rise to consciousness, that is only because we have been brainwashed into thinking such a thing by endless repetitions of such a groundless claim.  Similarly, if we had been endlessly told all our lives that consciousness was caused by electron collisions, then we might think that some glass jar with lots of colliding electrons would produce a conscious mind. 

We then have in the paper (under a title of "Systems of Mechanisms") three paragraphs making dogmatic claims such as the claim that "a set of elements can be conscious only if its mechanisms specify a conceptual structure that is irreducible to non-interdependent components (strong integration)." We are deeply mired now in arbitrary metaphysics, as we would be if we were reading a work of G. W. F. Hegel. Nothing has been done to show that "a set of elements can be conscious," so the writers have no business making such claims. Organisms are not correctly described as "a set of elements." 

A little later, in Box 1 of the paper, we have a glossary, which defines more than thirty terms that will be used in the paper. The glossary is very dense metaphysical gobbledygook. An example is the term "cause-effect repertoire," which is given this gibberish definition: "The probability distribution of potential past and future states of a system as constrained by a mechanism in its current state." 
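To get a sense of what that definition is supposed to amount to, here is a minimal sketch (my own toy illustration, not code from the paper) of a "cause repertoire" for a single AND gate: the probability distribution over the gate's possible past input states, constrained by its current output, assuming uniform priors over inputs.

```python
from itertools import product

def and_gate(a, b):
    """A two-input AND gate: the 'mechanism'."""
    return a & b

def cause_repertoire(current_output):
    """Distribution over past input states (a, b) consistent with the
    AND gate's current output, assuming all inputs equally likely."""
    past_states = list(product([0, 1], repeat=2))
    consistent = [s for s in past_states if and_gate(*s) == current_output]
    p = 1 / len(consistent)
    return {s: (p if s in consistent else 0.0) for s in past_states}

# Output 1 pins down the past completely: the inputs must have been (1, 1).
print(cause_repertoire(1))
# Output 0 leaves three equally likely past input states.
print(cause_repertoire(0))
```

This much is just elementary conditional probability over a truth table; the reader can judge whether dressing such bookkeeping in metaphysical vocabulary brings anyone closer to explaining experience.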

The paper then has a whole bunch of strange diagrams with many circles, circles within circles, arrows pointing from one circle to another, and so forth. None of this does anything to clarify how humans have consciousness. 

Below (in italics) are some of the dubious metaphysical claims we read in the paper:

  • "Recall that IIT's information postulate is based on the intuition that, for something to exist, it must make a difference. By extension, something exists all the more, the more of a difference it makes." No, it is not true that for something to exist, it must make a difference. Dust clouds in interstellar space exist, and rocks in the center of distant planets exist, without making any difference. And something does not exist "all the more" depending on the difference it makes. A person with no influence on the world exists just as much as some influential person. 
  • "The integration postulate further requires that, for a whole to exist, it must make a difference above and beyond its partition, i.e. it must be irreducible." No, a whole does not have to be irreducible. A whole consisting of three people can be reduced to three individuals, and a molecule consisting of five atoms can be broken up and reduced to its individual atoms. 
  • "Complexes cannot overlap and at each point in time, an element/mechanism can belong to one complex only." No, complexes can overlap; for example, the brain complex overlaps with the circulatory system in the body. And an element can belong to more than one complex. A blood vessel in the brain belongs to both the brain system and the circulatory system.  
  • "The exclusion postulate at the level of systems of mechanisms says that only a conceptual structure that is maximally irreducible can give rise to consciousness – other constellations generated by overlapping elements are excluded."  Since humans have no understanding at all of how any structure can give rise to consciousness, it is unwarranted to be making some claim with the form "only X can give rise to consciousness."  Describing such a claim as a postulate (an assumption) indicates its weakness. 
  • "The exclusion postulate requires, first, that only one cause exists. This requirement represents a causal version of Occam's razor, saying in essence that 'causes should not be multiplied beyond necessity', i.e. that causal superposition is not allowed." Occam's razor is not the principle that something cannot have multiple causes. It is the principle that in general we should prefer a simpler explanation that requires postulating fewer things in order to explain something. Many things do have multiple causes, and it is dead wrong to claim that causal superposition (assuming multiple causes of a single effect) is not allowed. 
  • "Simple systems can be conscious: a minimally conscious photodiode." This is followed by text claiming that a tiny unit called a photodiode is minimally conscious. Since a modern digital camera contains very many such photodiodes (one for each pixel captured), integrated information theory would seem to predict that every digital camera is substantially conscious -- an idea that is extremely nonsensical. 

Later in the article we have an inaccurate appeal to one of the phoniest myths of neuroscientists: the claim that split-brain patients have two different minds. We read this:

"Under special circumstances, such as after split brain surgery, the main complex may split into two main complexes, both having high ΦMax. There is solid evidence that in such cases consciousness itself splits in two individual consciousnesses that are unaware of each other."  

No such evidence exists. A similar bogus claim is made in another article on integrated information theory appearing on the www.integratedinformation.org site (one authored by Giulio Tononi, one of the three authors mentioned above): "It is well established that, after the complete section of the corpus callosum—the roughly 200 million fibers that connect the cortices of the two hemispheres—consciousness is split in two: there are two separate 'flows' of experience, one associated with the left hemisphere and one with the right hemispheres." That claim is untrue. To the contrary, in 2014 the Wikipedia article on split-brain patients stated the following:

"In general, split-brained patients behave in a coordinated, purposeful and consistent manner, despite the independent, parallel, usually different and occasionally conflicting processing of the same information from the environment by the two disconnected hemispheres...Often, split-brained patients are indistinguishable from normal adults."

In the video here we see a split-brain patient who seems like a pretty normal person, not at all someone with "two minds." And at the beginning of the video here the same patient says that after such a split-brain operation "you don't notice it" and that you don't feel any different than you did before – hardly what someone would say if the operation had produced "two minds" in someone. And the video here about a person with a split brain from birth shows us what is clearly someone with one mind, not two. 

A scientific study published in 2017 set the record straight on split-brain patients. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled "Split Brain Does Not Lead to Split Consciousness" stated, "The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain."

The press release states the following: "According to Pinto, the results present clear evidence for unity of consciousness in split-brain patients." The paper states, "These findings suggest that severing the cortical connections between hemispheres splits visual perception, but does not create two independent conscious perceivers within one brain." The recent article here in Psychology Today describes the bizarre experiment that was used to make the groundless claim that split-brain patients have two minds. It was an experiment based only on visual perception, using a strange experimental setup unlike anything anyone normally encounters. The article shreds to pieces the claim that results from such an experiment show that split-brain patients have two minds:

"Not so fast. There are several reasons to question the conclusions Sperry, Gazzaniga, and others sought to draw. First, both split-brain patients and people closest to them report that no major changes in the person have occurred after the surgery. When you communicate with the patient, you never get the sense that there are now different people living in the patient's head.

This would be very puzzling if the mind was really split. Currently, you are the only conscious person in your neocortex. You consciously perceive your entire visual field, and you control your whole body. However, if your mind splits, this would dramatically change. You would become two people: 'lefty' and 'righty.' 'Lefty' would only see what is in the right visual field and control the right side of the body while 'righty' would see what’s in the left visual field and control the left side of the body. Both 'lefty' and 'righty' would be half-blind and half-paralyzed. It would seem to each of them that another person is in charge of half of the body.

Yet, patients never indicate that it feels as though someone else is controlling half of the body. The patients’ loved ones don’t report noticing a dramatic change in the person after the surgery either. Could we all — patients themselves, their family members, and neutral observers — miss the signs that a single person has been replaced by two people? If you suddenly lost control of half of your body, could you fail to notice? Could you fail to notice if the two halves of your spouse’s or child’s body are controlled by two different minds?"

A 2020 paper states this about split-brain patients: "Apart from a number of anecdotal incidents in the subacute phase following the surgery, these patients seem to behave in a socially ordinary manner and they report feeling unchanged after the operation (Bogen, Fisher, & Vogel, 1965; Pinto et al., 2017a; R. W. Sperry, 1968; R. Sperry, 1984)." Misleading statements by neuroscientists are extremely common, and claims by some of them that normal-speaking and normal-acting split-brain patients have two minds (based merely on differing results produced in very weird artificial experimental setups unlike real-world cases) are one of the most egregious examples of inaccurate speech by neuroscientists. 

What we have in the "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0" paper is mainly metaphysics following the opaque oracular style of Hegel and Heidegger, often careless and poorly reasoned metaphysics. The claim often made that integrated information theory is a "scientific theory of consciousness" is untrue. Integrated information theory is a very errant metaphysical theory that includes a few appeals to scientific observations, to give a little scientific flavor to its gobbledygook. The most important reference the theory makes to an alleged scientific observation is a bogus claim that splitting a brain by severing the corpus callosum produces two minds, something that has never actually been observed, with the actual observations telling us that no such thing occurs. 

Besides inaccurately predicting that split-brain patients should have two minds, integrated information theory inaccurately predicts that "widespread lesions" of the cortex should cause unconsciousness. In the scholarpedia.org article on the theory, Giulio Tononi states this: 

"IIT provides a principled and parsimonious way to account for why certain brain regions appear to be essential for our consciousness while others do not. For example, widespread lesions of the cerebral cortex lead to loss of consciousness, and local lesions or stimulations of various cortical areas and tracts can affect its content (for example, the experience of color)."

To the contrary, it is a fact that many epileptic patients with severe seizures underwent hemispherectomy operations in which half of the brain (including half of the cortex) was removed, without any major effect on either consciousness or intelligence. Many of John Lorber's patients with good intelligence and normal consciousness had lost most of their cortex. A French person who held a job as a civil servant was found to have "little more than a thin sheet of actual brain tissue." In the paper here we read on page 1 of a case reported by Martel in 1823 of a boy who after age five lost all of his senses except hearing, and became bed-confined. Until death he "seemed mentally unimpaired." But after he died, an autopsy found that apart from "residues of meninges" there was "no trace of a brain" inside the skull. This was good consciousness with little or no cortex. In the same paper we read of a person who had a normal life despite having "very little cortex" because of hydrocephalus, in which brain tissue is replaced by fluid:

"A man was examined because of his headache, and to his physicians' surprise, he had an 'incredibly large' hydrocephalus. Villinger, the director of the Cognitive Neurology Department, stated that this man had 'almost no brain,' only 'a very thin layer of cortical tissue.' This man led an unremarkable life, and his hydrocephalus was only discovered by chance (Hasler, 2016, p. 18)"
