When talking about the problem of explaining human minds, those in academia love to use the term "problem of consciousness." But it is a huge fallacy to think there is merely some "problem of consciousness" when there is a trillion times bigger "problem of explaining human minds, human mental capabilities and human mental phenomena." Once you realize this, you may realize that presenting some "theory of consciousness" can never do much to solve the explanatory problems in the philosophy of mind, which are huge and "all over the place."
The trick of posing a mere "problem of consciousness" is a ridiculous ruse. A human being is not merely "some consciousness." A human being is an enormously complex reality, and the mental reality is as complex as the physical reality. You dehumanize and degrade human beings when you refer to their minds as mere "consciousness." The problem of human mentality is the problem of credibly explaining the forty or fifty most interesting types of human mental experiences, human mental characteristics and human mental capabilities.
It is always a silly, stupid trick when someone tries to reduce so complex a reality, making it sound like the faintest shadow of what it is, by speaking as if there is a mere "problem of consciousness," and by talking as if humans are just "some consciousness" that needs to be explained. Such a shabby, pathetic trick (which can be called consciousness shadow-speaking) is as silly as ignoring the vast complexity of the organization of the human body, and speaking as if explaining the origin of human bodies is just a task of explaining how there might occur "some carbon concentrations."
The person attempting so pathetic a trick is acting as silly as a person who stands at the seashore, fills a glass with seawater, and says, "Oceans are easy to explain -- they're just water." Just as the ocean includes trillions of deep, baffling complexities such as all of the organization and biochemistry of sea creatures -- something infinitely more complex than mere water -- the human mind and human mental experiences involve trillions of complexities, and such a reality is something almost infinitely more complex than mere "consciousness." The reductionist who engages in consciousness shadow-speaking is someone engaging in a trick as misleading as someone who says, "Mathematics is real simple -- it's just counting."
The majority of people who try to reduce the mountain-sized problem of explaining human minds and human mental experiences in all their variety into the mouse-sized problem of explaining some mere dry abstraction of "consciousness" are people who were too lazy to study minds and brains very deeply, and who used this stupid trick of consciousness shadow-speaking to try to make their explanation job a million times easier. People who lack credible explanations for very complex realities (whether physical or mental) love to use poorly descriptive language in which they try to make the complex realities sound a million times simpler than they are.
The dialog below illustrates the stupidity of trying to explain human minds by describing a human mind as mere "consciousness" and then trying to create a "theory of consciousness" that applies to everything conscious.
James: John, I've made great progress in explaining how the human body arises during a mother's pregnancy.
John: Great, tell me about it.
James: I call my explanation a “theory of solidity.”
John: A theory of solidity?
James: Yes, because that's the essential nature of human bodies, that they are solid. So my theory attempts to explain how solidity arises.
John: I think you've gone in the wrong direction, and made a big mistake.
James: Why?
John: Because a human body is something gigantically greater than mere “solidity.” A human body is a state of vast hierarchical organization, with an oceanic level of functional complexity. For example, in our bodies are 20,000 different types of protein inventions, most of them very special arrangements of many thousands of atoms. And we have 200 types of cells, each so complex they are compared to factories. You would do nothing to explain so impressive a reality of physical organization by merely explaining “solidity.” Your body is something gigantically more than mere “solidity.”

James: John, I've made great progress in explaining how the human mind arises.
John: Great, tell me about it.
James: I call my explanation a “theory of consciousness.”
John: A theory of consciousness?
James: Yes, because that's the essential nature of human minds, that they are conscious. So my theory attempts to explain how consciousness arises.
John: I think you've gone in the wrong direction, and made a big mistake.
James: Why?
John: Because a human mind is something gigantically greater than mere “consciousness.” You and I are not merely “some consciousness.” We are thinking, believing, seeing, reading, hearing, loving, imagining minds with insight, emotions, viewpoints, and a great variety of mental powers such as instant learning ability, the ability to hold memories for decades, and the ability to instantly recall knowledge upon merely hearing a word or seeing a face. Human minds and human mental experiences are a reality of oceanic depth, so much more than mere “consciousness.”
A recent study attempted to do a "showdown" between two different theories called "theories of consciousness," in an attempt to reveal a winner and a loser. The experimental study should be regarded with the greatest of suspicion, because of all the suspicions we should have about anything at all calling itself a "theory of consciousness." An experimental showdown between two different theories calling themselves a "theory of consciousness" is rather like trying to do an experimental showdown between the theory of palm-reading and the theory of astrology. The paper is entitled "Adversarial testing of global neuronal workspace and integrated information theories of consciousness."
The paper makes quite a few dubious claims about predictions made by one or the other of these theories. We should treat with suspicion claims made about what is predicted by either of these theories (the global workspace theory and the integrated information theory). Scientists often make unwarranted claims that this or that theory predicts something. Often such claims are made to try to achieve some aura of predictive success for some theory. It works like this:
(1) A scientist claims that some theory he favors predicts the observation of X.
(2) The scientist then tries to show that X was observed.
(3) The scientist then says we should have confidence in the theory because it made a successful prediction.
Very often this is misleading in one way or another. The claim that the theory predicted the observation of X may be untrue. The claim that X was observed may be untrue. And just because some theory predicts something does not mean the theory is true or likely to be true. There are all kinds of false theories that may predict 1001 things, and some of those things may be true.
We read in the paper claims about predictions of two rival theories of consciousness:
"We tested three preregistered, peer-reviewed predictions of IIT and GNWT for how the brain enables conscious experience (Fig. 1a). Prediction 1 addresses the cortical areas holding information about different aspects of conscious content. IIT predicts that conscious content is maximal in posterior brain areas, whereas GNWT predicts a necessary role for PFC. Prediction 2 pertains to the maintenance of conscious percepts over time. IIT predicts that conscious content is actively maintained by neural activity in the posterior ‘hot zone’ throughout the duration of a conscious experience, whereas GNWT predicts ignition events in PFC at stimulus onset and offset, updating the global workspace, with activity-silent information maintenance in between. Prediction 3 examines interareal connectivity during conscious perception. IIT predicts sustained short-range connectivity within the posterior cortex, linking low-level sensory (V1/V2) with high-level category-selective areas (for example, fusiform face area and lateral occipital cortex), whereas GNWT predicts long-range connectivity between high-level category-selective areas and PFC."
We should treat with skepticism all of these claims that such statements are actually predictions of such theories, and we should note that none of the claimed "predictions" qualify as precise predictions or exact numerical predictions. The claimed "predictions" are woolly statements that are vague enough to be claimed as true no matter what is observed. Also, the claimed "predictions" are not clearly at odds with each other, meaning you don't actually have a situation which is suitable for doing observations and announcing that one of the theories is the winner and the other the loser.
To perform this dubious "showdown" of these two theories of consciousness, a large number of subjects had their brains scanned in fMRI machines, and another group had their eyes scanned while their brain waves were read using invasive brain-implanted electrodes. Different images were shown to these observers, with each image appearing for only about a second. We read this:
"To test critical predictions of the theories, five experimental manipulations were included in the experimental design: (1) four stimulus categories (faces, objects, letters and false fonts), (2) 20 stimulus identities (20 different exemplars per stimulus category), (3) three stimulus orientations (front, left and right view), (4) three stimulus durations (0.5 s, 1.0 s and 1.5 s), and (5) task relevance (relevant targets, relevant non-targets and irrelevant)."
So apparently subjects were shown pictures for only a brief interval, ranging from half a second to 1.5 seconds. The pictures might have been a picture of a face, an object, a letter such as A or B, or a "false font." The authors claim to have done "decoding of conscious content" from analyzing data obtained from these subjects: fMRI brain scan data, EEG brain wave data, eye movement data using an eye movement tracker, and magnetoencephalography brain scan data. The claim is misleading. No robust evidence of any "decoding of conscious content" has occurred.
To try to back up this claim of "decoding of conscious content," we have a Figure 2 that shows us some result obtained by an AI-type pattern recognizer after analyzing both EEG brain wave data and eye movement data gathered using an eye tracker device (the Eyelink 1000 Plus system shown in a photo below). We see a "decoding accuracy" graph in which accuracy above 50% completely dies off after 1 second of someone seeing the visual stimulus. This is for 29 subjects who had intracranial electrodes inserted into their brains. The analytics are black-box analytics, and it is hard to unravel what flaws or tricks may have gone on to get these results. A look at the programming code used shows a byzantine maze of spaghetti code. No evidence is provided of being able to predict from brain data alone what a person is thinking or imagining. All we have is some attempt to show that by analyzing brain wave data and eye movement data taken at the instant someone was seeing something, you can predict the category of what the person was seeing.
It is well known that EEG readings are extremely sensitive to muscle movements, which cause blips in the lines picked up by electrodes. So imagine some experiment in which you get EEG readings while someone is shown a picture that may be a recognizable face, a picture of a cute or scary animal, or something neutral like the letter "X" or "Y." A person might more often make muscle movements when seeing certain types of images. He might give a smile of recognition or appreciation when seeing a celebrity's face or a kitten, or he might squint when seeing some puzzling image, or he might raise his eyebrow when seeing some scary image; or he might grimace when seeing an offensive image. No such muscle movements might occur when the person sees something like the letter X or the letter Y. From such muscle movements alone, an AI pattern classifier might be able to guess with better than 50% accuracy the category of the thing the person saw. But that would not be "decoding of conscious content."
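The point about artifact-driven classification can be illustrated with a toy simulation. Everything below is invented for illustration: the two categories, the artifact rate, and the amplitude numbers are all made-up assumptions, and this is not the paper's code or data. The sketch shows only that a trivial threshold rule applied to a simulated muscle-artifact amplitude can "decode" stimulus category well above 50% chance without reading any mental content at all:

```python
import random

random.seed(0)

# Purely illustrative simulation (invented numbers, not the paper's data):
# two stimulus categories, "face" and "letter". Suppose faces sometimes
# provoke a small facial-muscle movement that contaminates the EEG with a
# high-amplitude artifact, while letters rarely do. Nothing resembling
# "conscious content" is simulated here.
def simulated_trial(category):
    # Baseline EEG artifact amplitude (arbitrary units).
    amplitude = random.gauss(1.0, 0.3)
    # Assume faces provoke a smile/squint artifact on ~60% of trials.
    if category == "face" and random.random() < 0.6:
        amplitude += random.gauss(1.5, 0.3)
    return amplitude

# A trivial threshold "classifier": guess "face" whenever the artifact
# amplitude is large. It never looks at anything like mental content.
def classify(amplitude, threshold=1.7):
    return "face" if amplitude > threshold else "letter"

trials = [(cat, simulated_trial(cat))
          for cat in ["face", "letter"] * 500]
correct = sum(classify(a) == cat for cat, a in trials)
accuracy = correct / len(trials)
print(f"Category 'decoding' accuracy from artifact alone: {accuracy:.0%}")
```

The above-chance accuracy in this sketch comes entirely from a simulated reaction artifact, which is exactly why above-chance category guessing is not, by itself, evidence of "decoding of conscious content."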
Then there's the fact that the eye movement data gathered by some hi-tech eye movement device could have tended to pick up eye movement differences when different categories of images were shown. Show a human a picture of a face, and his eye will tend to focus on the face, or his eye may widen if he is surprised. Show a human a picture of a mere character (such as X) or some meaningless symbol, and the person's eye will not tend to focus as strongly, and it will not widen.
So the evidence presented for "decoding of conscious content" in this paper is not robust evidence of being able to detect the type of thing someone is thinking about or seeing by analyzing brain data. The evidence the paper presents is all data based on "the moment of perception," when different types of muscle movements or eye movements may have occurred as different types of things were seen.
Figure 3 in the paper is very similar to Figure 2. We are shown a line graph which seems to indicate some above-average predictive success coming from analyzing iEEG brain wave data (and also eye movement data) coming from 31 patients with implanted electrodes. The claimed success is purely in predicting the category of a type of image someone saw. But the predictive success only occurs at the half-second mark, vanishing at the one-second mark. The result is consistent with the idea that the claimed predictive success comes purely from picking up different types of muscle movements (such as eye movements or facial expressions) that occur differently depending on what a person sees.
The paper is one of many neuroscience papers which makes false claims about neural representations. There is no evidence that the brain contains any representations of anything anyone learns, recalls or sees. But neuroscientists love to claim that this or that thing they see in the brain is a "representation" of something. In this case the authors again and again refer to "representations" in the brain, without producing any good evidence for any such thing.