Sunday, June 1, 2025

"Consciousness Theory Showdown" Shows Mainly Shady Neuroscientist Sleight-of-Hand

When talking about the problem of explaining human minds, those in academia love to use the term "problem of consciousness." But it is a huge fallacy to think there is merely some "problem of consciousness" when there is a trillion times bigger "problem of explaining human minds, human mental capabilities and human mental phenomena." Once you realize this, you may realize that presenting some "theory of consciousness" can never do much to solve the explanatory problems in the philosophy of mind, which are huge and "all over the place."

consciousness babbling

The trick of posing a mere "problem of consciousness" is a ridiculous ruse. A human being is not merely "some consciousness." A human being is an enormously complex reality, and the mental reality is as complex as the physical reality.  You dehumanize and degrade human beings when you refer to their minds as mere "consciousness." The problem of human mentality is the problem of credibly explaining the forty or fifty most interesting types of human mental experiences, human mental characteristics and human mental capabilities.

It is always a silly, stupid trick when someone reduces so complex a reality to the faintest shadow of what it is, by speaking as if there is a mere "problem of consciousness," and talking as if humans are just "some consciousness" that needs to be explained. Such a shabby, pathetic trick (which can be called consciousness shadow-speaking) is as silly as ignoring the vast complexity of the organization of the human body, and speaking as if explaining the origin of human bodies is just a task of explaining how there might occur "some carbon concentrations."

The person attempting so pathetic a trick is acting as silly as a person who stands at the seashore, fills a glass with seawater, and says, "Oceans are easy to explain -- they're just water." Just as the ocean includes trillions of deep, baffling complexities such as all of the organization and biochemistry of sea creatures -- something infinitely more complex than mere water -- the human mind and human mental experiences involve trillions of complexities, and such a reality is something almost infinitely more complex than mere "consciousness." The reductionist who engages in consciousness shadow-speaking is engaging in a trick as misleading as saying, "Mathematics is real simple -- it's just counting."

consciousness misspeaking

The majority of people who try to reduce the mountain-sized problem of explaining human minds and human mental experiences in all their variety into the mouse-sized problem of explaining some mere dry abstraction of "consciousness" are people who were too lazy to study minds and brains very deeply, and who used this stupid trick of consciousness shadow-speaking to try to make their explanation job a million times easier. People who lack credible explanations for very complex realities (whether physical or mental) love to use poorly descriptive language in which they try to make the complex realities sound a million times simpler than they are.

The dialog below illustrates the stupidity of trying to explain human minds by describing a human mind as mere "consciousness" and then trying to create a "theory of consciousness" that applies to everything conscious. 

James: John, I've made great progress in explaining how the human body arises during a mother's pregnancy.

John: Great, tell me about it.

James: I call my explanation a “theory of solidity.”

John: A theory of solidity?

James: Yes, because that's the essential nature of human bodies, that they are solid. So my theory attempts to explain how solidity arises.

John: I think you've gone in the wrong direction, and made a big mistake.

James: Why?

John: Because a human body is something gigantically greater than mere “solidity.” A human body is a state of vast hierarchical organization, with an oceanic level of functional complexity. For example, in our bodies are 20,000 different types of protein inventions, most of them very special arrangements of many thousands of atoms. And we have 200 types of cells, each so complex they are compared to factories. You would do nothing to explain so impressive a reality of physical organization by merely explaining “solidity.” Your body is something gigantically more than mere “solidity.”


James: John, I've made great progress in explaining how the human mind arises.

John: Great, tell me about it.

James: I call my explanation a “theory of consciousness.”

John: A theory of consciousness?

James: Yes, because that's the essential nature of human minds, that they are conscious. So my theory attempts to explain how consciousness arises.

John: I think you've gone in the wrong direction, and made a big mistake.

James: Why?

John: Because a human mind is something gigantically greater than mere “consciousness.” You and I are not merely “some consciousness.” We are thinking, believing, seeing, reading, hearing, loving, imagining minds with insight, emotions, viewpoints, and a great variety of mental powers such as instant learning ability, the ability to hold memories for decades, and the ability to instantly recall knowledge upon merely hearing a word or seeing a face. Human minds and human mental experiences are a reality of oceanic depth, so much more than mere “consciousness.”

A recent study attempted to do a "showdown" between two different theories called "theories of consciousness," in an attempt to reveal a winner and a loser. The experimental study should be regarded with the greatest of suspicion, because of all the suspicions we should have about anything at all calling itself a "theory of consciousness." An experimental showdown between two different theories calling themselves a "theory of consciousness" is rather like trying to do an experimental showdown between the theory of palm-reading and the theory of astrology. The paper is entitled "Adversarial testing of global neuronal workspace and integrated information theories of consciousness."

The paper makes quite a few dubious claims about predictions made by one or the other of these theories.  We should treat with suspicion claims made about what is predicted by either of these theories (the global workspace theory and the integrated information theory). Scientists often make unwarranted claims that this or that theory predicts something. Often such claims are made to try to achieve some aura of predictive success for some theory. It works like this:

(1) A scientist claims that some theory he favors predicts the observation of X. 

(2) The scientist then tries to show that X was observed. 

(3) The scientist then says we should have confidence in the theory because it made a successful prediction. 

Very often this is misleading in one way or another. The claim that the theory predicted the observation of X may be untrue. The claim that X was observed may be untrue. And just because some theory predicts something does not mean the theory is true or likely to be true. There are all kinds of false theories that may predict 1001 things, and some of those things may be true. 
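To state the problem in standard Bayesian terms (a textbook formulation, not anything taken from the paper itself), the probability of a theory given an observation X is:

P(theory | X) = [ P(X | theory) × P(theory) ] / [ P(X | theory) × P(theory) + P(X | not-theory) × P(not-theory) ]

If the claimed "prediction" is vague enough that X is also quite likely when the theory is false (a large P(X | not-theory)), then observing X barely raises the probability of the theory at all.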

We read in the paper claims about predictions of two rival theories of consciousness:

"We tested three preregistered, peer-reviewed predictions of IIT and GNWT for how the brain enables conscious experience (Fig. 1a). Prediction 1 addresses the cortical areas holding information about different aspects of conscious content. IIT predicts that conscious content is maximal in posterior brain areas, whereas GNWT predicts a necessary role for PFC. Prediction 2 pertains to the maintenance of conscious percepts over time. IIT predicts that conscious content is actively maintained by neural activity in the posterior ‘hot zone’ throughout the duration of a conscious experience, whereas GNWT predicts ignition events in PFC at stimulus onset and offset, updating the global workspace, with activity-silent information maintenance in between. Prediction 3 examines interareal connectivity during conscious perception. IIT predicts sustained short-range connectivity within the posterior cortex, linking low-level sensory (V1/V2) with high-level category-selective areas (for example, fusiform face area and lateral occipital cortex), whereas GNWT predicts long-range connectivity between high-level category-selective areas and PFC."

We should treat with skepticism all of these claims that such statements are actually predictions of such theories, and we should note that none of the claimed "predictions" qualify as precise predictions or exact numerical predictions. The claimed "predictions" are woolly kinds of statements, vague enough to be claimed as true no matter what is observed. Also, the claimed "predictions" are not clearly at odds with each other, meaning you do not actually have a situation suitable for making observations and announcing that one of the theories is the winner and the other the loser.

To perform this dubious "showdown" of these two theories of consciousness, a large number of subjects had their brains scanned in fMRI machines, and another group had their eyes tracked while their brain waves were read using invasive brain-implanted electrodes. Different images were shown to these observers, with each image appearing for only between half a second and a second and a half. We read this:

"To test critical predictions of the theories, five experimental manipulations were included in the experimental design: (1) four stimulus categories (faces, objects, letters and false fonts), (2) 20 stimulus identities (20 different exemplars per stimulus category), (3) three stimulus orientations (front, left and right view), (4) three stimulus durations (0.5 s, 1.0 s and 1.5 s), and (5) task relevance (relevant targets, relevant non-targets and irrelevant)."

So apparently subjects were shown pictures for a tiny instant, ranging from half a second to 1.5 seconds. The pictures might have been a picture of a face, an object, a letter such as A or B, or a "false font." The authors claim to have done "decoding of conscious content" from analyzing data obtained from these subjects: fMRI brain scan data, EEG brain wave data, eye movement data gathered using an eye movement tracker, and magnetoencephalography brain scan data. The claim is misleading. No robust evidence of any "decoding of conscious content" has occurred.

To try to back up this claim of "decoding of conscious content," we have a Figure 2 that shows us some result obtained by an AI-type pattern recognizer after analyzing both EEG brain wave data and eye movement data gathered using an eye tracker device (the EyeLink 1000 Plus system shown in a photo below). We see a "decoding accuracy" graph in which accuracy above 50% completely dies off after 1 second of someone seeing the visual stimulus. This is for 29 subjects who had intracranial electrodes inserted into their brains. The analytics are black-box analytics, and it is hard to unravel what flaws or tricks may have gone on to get these results. A look at the programming code used shows a byzantine maze of spaghetti code. No evidence is provided of being able to predict from brain data alone what a person is thinking or imagining. All we have is some attempt to show that by analyzing brain wave data and eye movement data taken at the instant someone was seeing something, you can predict the category of what the person was seeing.

It is well known that EEG readings are extremely sensitive to muscle movements, which cause blips in the lines picked up by electrodes. So imagine some experiment in which you get EEG readings while someone is shown a picture that may be a recognizable face, a picture of a cute or scary animal, or something neutral like the letter "X" or "Y." A person might more often make muscle movements when seeing certain types of images. He might give a smile of recognition or appreciation when seeing a celebrity's face or a kitten, or he might squint when seeing some puzzling image, or he might raise his eyebrows when seeing some scary image; or he might grimace when seeing an offensive image. No such muscle movements might occur when the person sees something like the letter X or the letter Y. From such muscle movements alone, an AI pattern classifier might be able to guess with higher than 50% accuracy the category of the thing the person saw. But that would not be "decoding of conscious content."

Then there's the fact that the eye movement data gathered by some hi-tech eye movement device could have tended to pick up eye movement differences when different categories of images were shown. Show a human a picture of a face, and his eye will tend to focus on the face, or his eye may widen if he is surprised. Show a human a picture of a mere character (such as X) or some meaningless symbol, and the person's eye will not tend to focus as strongly, and it will not widen.

So the evidence presented for "decoding of conscious content" in this paper is not robust evidence of being able to detect the type of thing someone is thinking about or seeing by analyzing brain data. The evidence the paper presents is all data based on "the moment of perception," when different types of muscle movements or eye movements may have occurred as different types of things were seen.
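To make the worry concrete, here is a minimal sketch in Python of how this confound could work. Everything here is an illustrative assumption (the feature names, the simulated numbers, the two categories); it is not the paper's pipeline. The point is only that a standard classifier fed eye-derived features alone, with no brain data whatsoever, can "decode" stimulus category above the 50% chance level:

# Hypothetical illustration: above-chance "decoding" from eye data alone.
# The features (pupil dilation, fixation spread, saccade count) and the
# simulated category differences are assumptions for illustration only;
# no brain data is used anywhere in this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
# 0 = face, 1 = letter; faces are assumed to evoke slightly larger pupil
# responses and tighter fixation clusters than letters do.
labels = rng.integers(0, 2, n_trials)
pupil = rng.normal(np.where(labels == 0, 1.2, 1.0), 0.3)
fixation_spread = rng.normal(np.where(labels == 0, 0.8, 1.1), 0.3)
saccade_count = rng.normal(np.where(labels == 0, 3.0, 3.5), 1.0)
X = np.column_stack([pupil, fixation_spread, saccade_count])

# Cross-validated accuracy comes out well above the 50% chance level,
# even though the classifier never sees a single brain measurement.
acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
print(f"category 'decoding' accuracy from eye features alone: {acc:.2f}")

If features like these are mixed into a pipeline alongside EEG features, any resulting accuracy cannot honestly be attributed to "conscious content" read from the brain.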


EEG is sensitive to muscle movements

Figure 3 in the paper is very similar to Figure 2. We are shown a line graph which seems to indicate some above-chance predictive success coming from analyzing iEEG brain wave data (and also eye movement data) coming from 31 patients with implanted electrodes. The claimed success is purely in predicting the category of a type of image someone saw. But the predictive success only occurs at the half-second mark, vanishing at the one-second mark. The result is consistent with the idea that the claimed predictive success comes purely from picking up different types of muscle movements (such as eye movements), which occur when a person makes different types of facial expressions in reacting to things he sees.

The paper is one of many neuroscience papers which make false claims about neural representations. There is no evidence that the brain contains any representations of anything anyone learns, recalls or sees. But neuroscientists love to claim that this or that thing they see in the brain is a "representation" of something. In this case the authors again and again refer to "representations" in the brain, without producing any good evidence for any such thing.

An example of the paper's misrepresentations about representations is its statement "In posterior cortex ROIs, cross-temporal RSA revealed sustained face–object categorical representation." The evidence given for this claim is Figure 3D, which shows no sign of anything beyond the 1.5-second mark after someone saw something. Whatever is being graphed is some momentary response to a stimulus, and it is misleading to refer to that as either "sustained" or a "representation." A similar misstatement would occur if I showed you a picture of something disgusting, and then claimed that your momentary facial expression was a representation of what I showed you. Momentary responses are not representations.
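For readers unfamiliar with the method, below is a rough sketch of what a cross-temporal RSA computation amounts to (the array shape is an assumption for illustration; the paper's actual pipeline is far more elaborate). Notice that nothing in such a computation distinguishes a stored "representation" from a momentary stimulus-evoked response that simply decays slowly:

# Rough sketch of cross-temporal representational similarity analysis.
# The input shape (trials x channels x timepoints) is an illustrative
# assumption, not the paper's actual data format.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def cross_temporal_rsa(data):
    """data: trials x channels x timepoints array of neural recordings.
    Returns a timepoints x timepoints matrix of RDM correlations."""
    n_times = data.shape[2]
    # One representational dissimilarity matrix (RDM) per time point:
    # pairwise correlation distances between trial activity patterns.
    rdms = [pdist(data[:, :, t], metric="correlation") for t in range(n_times)]
    ct = np.zeros((n_times, n_times))
    for i in range(n_times):
        for j in range(n_times):
            # High off-diagonal values mean the pattern geometry at time i
            # resembles that at time j -- which a brief stimulus-evoked
            # response that merely fades slowly will also produce.
            ct[i, j] = spearmanr(rdms[i], rdms[j])[0]
    return ct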

The authors of this study have failed to produce any robust evidence for either the global workspace theory or the integrated information theory, and the authors give a kind of "it's a draw" verdict about their results, without saying that either theory was the winner. The global workspace theory is not a credible theory of consciousness, for reasons discussed here. The integrated information theory is not a credible theory of consciousness, for reasons discussed here and here.

We have in this study a classic example of how neuroscientists resort to a "something else" kind of cheat. Here's how it works:

(1) A neuroscientist will produce a study claiming to have determined something or predicted something based on brain data. 
(2) Sneaked into the study design will be some source of data other than brain data. That "something else" may be some software facility that the study is using, such as a database that has text annotations corresponding to images subjects were shown. Or the "something else" may be an eye-tracking system, which allows the study to make predictions based not merely on brain data, but on how a person's eyes are behaving. Or the "something else" may be any number of other things, such as some AI system that predicts words someone is about to state, based on historical tendencies of people to say one word after saying a previous word.
(3) Misleadingly it will be claimed or insinuated (in either the paper itself or the paper's press release) that the study predicted successfully based only on brain data, when any predictive success was crucially dependent on something other than just brain data (a simple control that would expose such a dependence is sketched after this list).
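Here is a minimal sketch (with hypothetical variable names, not anything from the paper) of the kind of ablation control that would expose such a cheat: fit the same classifier with and without the non-brain features, and see where the accuracy actually comes from.

# Hypothetical ablation control; all names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decoding_accuracy(features, labels):
    # Cross-validated accuracy of a standard linear classifier.
    return cross_val_score(LogisticRegression(max_iter=1000),
                           features, labels, cv=5).mean()

# brain_feats: trials x brain-feature matrix (e.g. EEG band power)
# eye_feats:   trials x eye-tracker feature matrix
# labels:      stimulus category per trial
def ablation_check(brain_feats, eye_feats, labels):
    combined = np.column_stack([brain_feats, eye_feats])
    acc_all = decoding_accuracy(combined, labels)
    acc_brain_only = decoding_accuracy(brain_feats, labels)
    acc_eyes_only = decoding_accuracy(eye_feats, labels)
    # If accuracy collapses toward chance without the eye features, the
    # "decoding of conscious content" was riding on the eye tracker.
    print(f"all features: {acc_all:.2f}, brain only: {acc_brain_only:.2f}, "
          f"eyes only: {acc_eyes_only:.2f}")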

In this study the sleazy "something else" was eye movement data gathered by some high-tech eye tracker in addition to the EEG data being taken to detect brain waves. The paper tells us that the EyeLink 1000 Plus system was used. Below is how that system looks (from a page promoting that system).


Figure 4 of the paper shows the same defects as Figure 3 and Figure 2, as no predictive success beyond the 1.5-second mark is shown, and there is the same reliance on a combination of EEG brain wave data and eye-scanning data, which cannot be called a prediction from brain states alone. Figure 4 is even less reliable as evidence than Figure 2 and Figure 3, because the sample size used is much smaller than 31.

In the Supplementary Notes, we read about this funny business going on:

"In the preregistration document, it is stated that iEEG patients with poor behavioral performance, defined as <70% hits or >30% FAs, were to be excluded (Data quality checks and exclusion of subjects, page 15). This threshold was considered based on a target recruitment of 50 patients. However, due to the coronavirus pandemic and despite our best efforts, only 34 patients were collected at the time of manuscript completion. To weigh the pros and cons of data inclusion and to increase sample size and coverage to better test the theories, it was decided to include in the analysis three iEEG patients whose behavior fell marginally short of the predefined behavioral criteria (i.e., hits < 70%, FA > 30%) to compensate for the lower number of participants."

So the authors set a standard for subjects that would be included, and found that they did not have enough subjects if that standard were to be followed. So the standard was then lowered. But even with that bit of malfeasance, was the study group size adequate for good statistical power? We don't know, because no sample size calculation was done.
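For comparison, a standard sample size calculation takes only a few lines of work. Below is a minimal, generic sketch using a one-sample t-test power calculation; the assumed effect sizes are illustrative, not figures from the paper, and a real calculation would need to match the study's own statistical design:

# Generic power calculation sketch; the effect sizes (Cohen's d) are
# illustrative assumptions, not numbers taken from the paper.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
for d in (0.5, 0.3):
    # Solve for the number of subjects needed for 80% power at alpha = 0.05.
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n:.0f} subjects needed")
# At d = 0.5 this already demands roughly as many subjects as the study's
# 31-34 iEEG patients; at a weaker d = 0.3 it demands roughly 90.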

The results in this study were highly dependent upon the patients with implanted electrodes, referred to in the paper as iEEG subjects. These were very sick patients with treatment-resistant epilepsy, who were being evaluated for surgery, through a method in which electrodes were implanted to try and find suitable spots for surgery. We are told, "A total of 4,057 electrodes (892 grids, 346 strips and 2,819 depths) were implanted across 32 patients with drug-resistant focal epilepsy undergoing clinically motivated invasive monitoring." So each of these patients had an average of about 126 electrodes implanted in their brains. Most of the time when people have electrodes implanted for epilepsy surgery evaluation, a much smaller number of electrodes is used, such as only 20.

A key question is: were all these electrode implantations medically necessary? Or was there only a medical need to implant a much smaller number of electrodes?  Were many of the risky electrode implants into the brains of these sick patients done purely for the sake of this poorly designed study? We do not know the answer to these questions, because the authors have not told us. They have not made any claim that all of the electrode implants were medically necessary. 

The abuse of very sick epilepsy patients is one of the most appalling scandals of modern experimental neuroscience. Neuroscientists hungry for brain data are luring very sick epilepsy patients into agreeing to implants inside their brains of more electrodes than are needed for surgical evaluation. When this happens, the patient undergoes very serious risks that are not medically necessary, for the sake of the research needs of the neuroscientist and not the needs of the patient. A paper tells us this:

"A recent meta-analysis reviewed complication rates and types of complications in patients undergoing subdural grid implantation for seizure mapping [41]. The most common complication which was reported was intracranial haemorrhage with a mean rate of 4% closely followed by other complications such as neurologic infections, superficial infections and elevated intracranial pressure. They also found that an increased number of electrodes (>67 electrodes) was independently associated with complications."

Another paper tells us this:

"There are definite medical risks associated with the use of intracranial electrodes. The complication rate of subdural electrodes has been reported to range between 6% and 26%. Relatively common adverse events associated with subdural electrodes are fever, headache, and nausea. Another group reported transient cerebrospinal fluid (CSF) leakage (13–31%), infection (6–8%), intracranial bleeding (8%), and cerebral edema in addition to an intracranial mass effect. Nair et al. reported that complications included (in the order of their frequency) infection, transient neurological deficit, epidural hematoma, increased intracranial pressure, and infarction. An increase in the complication rate was associated with (a) a greater number of grids/electrodes, (b) longer duration of monitoring, (c) older age of the patient, (d) left-sided grid insertion, (e) the use of burr holes in addition to craniotomy, and (f) an earlier year of monitoring (most likely a reflection of the aforementioned surgeon’s experience)."

The authors of any paper that reports on readings of electrodes implanted in the brains of epilepsy patients have a duty to fully inform us about whether epilepsy patients were endangered by the implantation of additional electrodes that were not medically necessary, and which were implanted mainly for the research purposes of the paper authors. Any such paper authors who fail to do that are authors we should tend to distrust.
