Sunday, June 28, 2020

Long Article Tries to Show Neural Memory Storage, but Gives No Real Evidence for It

In Discover magazine, there was recently a long article entitled “What Happens in Your Brain When You Make Memories?” An article like this is an attempt to convince us that scientists have some good understanding of how a brain could store memories. But the article completely fails at such a task, and provides no substantial evidence for any such thing as neural memory storage.

We are told the following: “In the 1990s, scientists analyzed high-resolution brain scans and found that these fleeting memories depend on neurons firing in the prefrontal cortex, the front part of the brain responsible for higher-level thinking.” There is no actual evidence that the front part of the brain is responsible for high-level thinking. You can read here for evidence that specifically contradicts such a claim. 

The quote above links to a brain scanning scientific paper. That paper provides no evidence that memories depend on neurons firing anywhere. In any brain scanning study, the two main questions to ask are: how many subjects were used, and what percent signal change was detected during the supposed activation of some brain region? The paper answers neither question. It mentions a brain scanning study, but gives no details about the number of subjects or the percent signal change detected. We can only assume that the study was one of those ridiculously common studies that either: (1) used too small a sample size to get a result of good statistical power, or (2) detected only meaningless signal changes of less than 1%, the kind of differences we would expect to get by chance, or (3) had both of these problems. When scientists use impressive sample sizes, or when they get impressive percent signal changes in brain scans, they almost always tell us so. When a paper fails to mention either of these numbers, we should assume it is because the numbers were not impressive, and were not good evidence.
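To see how easily sub-1% "signal changes" can arise from nothing, consider a toy simulation. All numbers below (baseline signal, noise level, scan counts) are illustrative assumptions, not values from any paper; the point is only that two conditions with an identical true signal still show fraction-of-a-percent differences from sampling noise alone.

```python
import random

# Toy simulation (illustrative numbers only): two scan conditions with an
# IDENTICAL true signal. Small-sample averages still differ by fractions
# of a percent -- the size of many reported fMRI "activations."
random.seed(0)

def mean_signal(n_measurements, baseline=100.0, noise_sd=2.0):
    """Average of noisy measurements around a fixed baseline."""
    return sum(random.gauss(baseline, noise_sd) for _ in range(n_measurements)) / n_measurements

percent_diffs = []
for _ in range(1000):
    condition_a = mean_signal(20)   # e.g. "task" scans
    condition_b = mean_signal(20)   # e.g. "rest" scans -- same true signal
    percent_diffs.append(abs(condition_a - condition_b))  # baseline is 100, so this is a percent

avg_chance_change = sum(percent_diffs) / len(percent_diffs)
print(f"typical 'signal change' from pure noise: {avg_chance_change:.2f}%")
```

With these made-up but reasonable noise settings, pure chance produces "signal changes" of roughly half a percent, which is why an unreported percent signal change under 1% should inspire no confidence.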

The article then states the dogma that memories form when synapses are strengthened: “When a long-term memory is formed, the connections between neurons, known as synapses, are strengthened.” There is no evidence that this is true. When stating the sentence above, the article has a link to a paper that provides no evidence that memory storage involves synapse strengthening.

In fact, there are reasons why it cannot be true that memories are formed by synapses being strengthened. The first is that synapses are too unstable to be a permanent storage place for memories. The proteins in synapses have an average lifetime of only a few weeks, but humans can accurately remember things for 60 years, roughly a thousand times longer. Synapses do not last very long. The paper here says that the half-life of synapses is "from days to months." The 2018 study here precisely measured the lifetimes of more than 3000 brain proteins from all over the brain, and found not a single one with a lifetime of more than 75 days (figure 2 shows the average protein lifetime was only 11 days).
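The arithmetic of this mismatch is easy to check. The sketch below is a back-of-the-envelope calculation using the figures cited above (11-day average lifetime, 75-day maximum lifetime, 60-year memory retention):

```python
# Back-of-the-envelope arithmetic comparing synaptic protein lifetimes
# (figures from the 2018 proteome study cited above) with the longest
# human memory retention.

days_per_year = 365
memory_retention_days = 60 * days_per_year   # memories retained for 60 years

avg_protein_lifetime_days = 11               # average lifetime in the study
max_protein_lifetime_days = 75               # longest lifetime found

ratio_vs_average = memory_retention_days / avg_protein_lifetime_days
ratio_vs_longest = memory_retention_days / max_protein_lifetime_days

print(f"60 years = {memory_retention_days} days")
print(f"Memory outlasts the average synaptic protein by ~{ratio_vs_average:.0f}x")
print(f"Memory outlasts even the longest-lived protein by ~{ratio_vs_longest:.0f}x")
```

So a 60-year memory outlasts the average synaptic protein by a factor of roughly two thousand, and outlasts even the longest-lived protein found in the study by a factor of nearly three hundred.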

The second reason is that humans are able to instantly form permanent new memories at a rapid clip. This was shown in an experiment in which humans were able to remember fairly well images they had seen for only a few seconds each. The experiment is described in the scientific paper “Visual long-term memory has a massive storage capacity for object details.” The experimenters showed subjects 2500 images over the course of five and a half hours, with each image viewed for only three seconds. Then the subjects were tested in the following way described by the paper:

"Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images."

Let us imagine that memories were being stored in the brain by a process of synapse strengthening. Each time a memory was stored, it would involve the synthesis of new proteins (requiring minutes), plus additional time (presumably also minutes) for an encoding effect in which knowledge or experience was translated into neural states. If the brain stored memories in such a way, it could not possibly keep up with remembering most of thousands of images that appeared for only three seconds each.

In the Discover magazine article, we are then told an inaccurate legend of scientific achievement: “In a 2012 Nature study, Tonegawa and researchers at MIT and Stanford University used optogenetics to demonstrate that our memory traces do indeed live in specific clusters of brain cells.” No, Susumu Tonegawa and his colleagues did not do any such thing. In the post here you can read a rather lengthy discussion of various memory-related papers authored by people working at Tonegawa's MIT memory laboratory. These papers suffer from a common defect: too-small sample sizes. In a neuroscience experiment, the absolute minimum for a somewhat compelling result is 15 animals per study group (and in most cases the number of animals per study group should be much higher, such as 25 or more). But again and again when looking up the memory-related papers from Tonegawa's laboratory, I found papers that used sample sizes of 10 or smaller, so small that they were not good evidence for anything.

In the Discover magazine article, we have a clear description of the utterly fallacious experimental technique used by Tonegawa, a technique that has given him the wrong idea that he has found a memory in brains. Here is what the article says:

"In the paper, the research team describes how they pinpointed a particular group of neurons in the hippocampus, a part of the brain involved in the formation of long-term memories, that start firing under certain conditions. In this case, the researchers did so by having mice explore an unfamiliar cage. '[Then] you give [the mouse] mild electric shocks to their footpads,' says Tonegawa. 'And the mouse will immediately form a memory that this cage is a scary place.' The next day, says Tonegawa, when the mice were placed in the cage without being zapped, this conditioning led them to fear that environment. The researchers later injected the rodents with a protein that can trigger brain cells — specifically, the neurons in the hippocampus that the scientists were targeting — by flashing them with blue light. 'These proteins have a chemical property to activate cells when light of a particular wavelength is delivered,' adds Tonegawa. Then, when the scientists flashed the mice with pulses of light in an entirely different environment, the neurons in the hippocampus they had labeled with the protein sprung into action — and the mice froze in place. The researchers think the animals were mentally flashing back to the experience of being shocked. 'That’s the logic of the experiment,' says Tonegawa. 'You can tell that these neurons, which were labeled yesterday, now carry those memory engrams.'"

There are two reasons why this technique is fallacious and unreliable, and provides no evidence at all that memories are stored in the brains of these mice. The first is that when the brains of the mice are flashed with pulses of light, the stimulation itself may be causing the mice to “freeze in place,” even though no fearful memory is being recalled. In fact, it is known that stimulating many different regions of a rodent brain will cause a mouse to “freeze in place.” A science paper says that it is possible to induce freezing in rodents by stimulating a wide variety of regions: "It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014)." Therefore there is no reason at all to assume that the “freeze in place” effect is actually being caused by the recall of a memory. It could be caused simply by the stimulation delivered to the brains of the mice, without any recall occurring.

The second reason why such an experiment is no evidence at all for memory storage in a brain is that “freezing behavior” in mice is very hard to reliably measure. In a typical paper, how much a mouse froze is assessed by arbitrary, error-prone human judgments. The reliable way to measure fear in mice is to measure their heart rate, which rises suddenly and rapidly when mice are afraid. But inexplicably, neuroscientists almost never use such a technique. Since scientists like Tonegawa do not use reliable techniques for determining whether rodents are afraid, and since the experiments depend on assumptions that the animals were afraid, we should have no confidence in the results of experiments like those described above.

freezing behavior in rodents

The Discover magazine article then proceeds to describe some work by neuroscientist Nanthia Suthana, in which epilepsy patients had their brains scanned while playing video games. We are told that some evidence was found that a kind of brain wave called theta oscillations was more common during memory recall. But we are not told how large an effect size was found, and we have no way of knowing whether it was merely some borderline result unlikely to be replicated. We are not given a link to any published paper; we are told merely that two papers are "in peer review." There is no mention of how many subjects were used. And memory retrieval is something quite different from memory storage. These are several reasons why such an experiment is not anything like substantial evidence for any neural storage of memories.

The last gasp of the Discover article is to claim that "Sah and his colleagues used optogenetics in rats to identify the circuitry in the brain that controls the return of traumatic memories."  The "return of traumatic memories" refers to memory retrieval, which is an entirely different thing from memory storage.  We are given a link to some study behind a paywall, and the abstract mentions no actual numbers, meaning we have no basis for any confidence in it.  Given the rampant sample size problem in experimental neuroscience, in which too-small study groups are being used in most studies, we should have no confidence in any study if we merely can read an abstract that does not mention how large a study group was used.

Despite its length, the Discover article fails to give us any solid piece of evidence suggesting that memories are stored in brains. The article is a kind of Exhibit A to back up my claim that scientists have no actual evidential basis for believing that memories are stored in brains. Their "best evidence" for such claims consists of "house of cards" studies that do not meet the requirements of compelling experimental science. We have no solid scientific basis for believing that memories are stored in brains, but we do have good scientific reasons for believing that memories cannot be stored in brains. One such reason is that people do not suffer substantial losses of learned information when half of the brain is removed in hemispherectomy operations. See the paper here for a discussion of 8 people who had "no observable mental changes" after removal of half of their brains. The paper specifically says "their memory was unimpared" [sic]. A second reason is that the proteins that make up the synapses of the brain have average lifetimes roughly a thousand times shorter than the maximum length of time (60 years) that humans can retain memories.

Tuesday, June 16, 2020

Study Finds "Poor Overall Reliability" of Brain Scanning Studies

For decades neuroscientists have been trying to use brain imaging to get evidence that particular regions of the brain cause particular mental effects.  The technique they use typically works like this:

(1) Put a small number of subjects in an MRI brain scanner, and either have them do some mental task or expose them to some kind of mental stimulus.
(2) Use the brain scanner to make images of the brain during such activity.
(3) Then analyze the brain scans, looking for some area of higher activation.

Often sleazy and misleading techniques are used to present the data from such studies. Techniques are very often used that make very small differences in brain signal strength look like very big differences. A discussion of such techniques, which I call "lying with colors," can be read here.

Claims that particular regions of the brain show larger activity during certain mental activities are typically not well-replicated in followup studies. A book by a cognitive scientist states this (pages 174-175):

"The empirical literature on brain correlates of emotion is wildly inconsistent, with every part of the brain showing some activity correlated with some aspect of emotional behavior. Those experiments that do report a few limited areas are usually in conflict with each other....There is little consensus about what is the actual role of a particular region. It is likely that the entire brain operates in a coordinated fashion, complexly interconnected, so that much of the research on individual components is misleading and inconclusive."

An article states the following:

"Small sample sizes in studies using functional MRI to investigate brain connectivity and function are common in neuroscience, despite years of warnings that such studies likely lack sufficient statistical power. A new analysis reveals that task-based fMRI experiments involving typical sample sizes of about 30 participants are only modestly replicable. This means that independent efforts to repeat the experiments are as likely to challenge as to confirm the original results."

There have been statistical critiques of brain imaging studies. One critique found a common statistical error that “inflates correlations.” The paper stated, “The underlying problems described here appear to be common in fMRI research of many kinds—not just in studies of emotion, personality, and social cognition.”

Another critique of neuroimaging found a “double dipping” statistical error that was very common. New Scientist reported a software problem, saying “Thousands of fMRI brain studies in doubt due to software flaws.”

Flaws in brain imaging studies were highlighted by a study that found "correlations of consciousness" by using an fMRI brain scan on a dead salmon. See here for an image summarizing the study.  The dead salmon study highlighted a problem called the multiple comparisons problem. This is the problem that the more comparisons you make between some region of the brain and an average, the more likely you will be to find a false positive, simply because of chance variations. A typical brain scan study will make many such comparisons, and in such a study there is a high chance of false positives. 
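The multiple comparisons problem is easy to demonstrate with a toy simulation. The voxel count and threshold below are illustrative assumptions, not figures from the dead salmon study or any other particular study:

```python
import random

# Toy demonstration of the multiple-comparisons problem: test thousands
# of "voxels" containing pure noise against an uncorrected threshold and
# count how many look "active" by chance alone.
random.seed(42)

n_voxels = 10_000   # comparisons in a whole-brain analysis (illustrative)
alpha = 0.05        # conventional uncorrected significance threshold

# Each pure-noise voxel exceeds the threshold with probability alpha.
false_positives = sum(1 for _ in range(n_voxels) if random.random() < alpha)

print(f"{false_positives} of {n_voxels} pure-noise voxels look 'active'")
# Without correction, roughly n_voxels * alpha (about 500) spurious hits
# are expected -- which is how even a dead salmon can show "brain activity."
```

With 10,000 uncorrected comparisons at p < 0.05, on the order of 500 regions of pure noise will look like activations, which is why uncorrected whole-brain analyses are so prone to false positives.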

Considering the question of “How Much of the Neuroimaging Literature Should We Discard?” a PhD and lab director states, “Personally I’d say I don’t really believe about 95% of what gets published...I think claims of 'selective' activation are almost without exception completely baseless.” This link says that a study, "published open-access in the Proceedings of the National Academy of Sciences, suggests that the methods used in fMRI research can create the illusion of brain activity where there is none—up to 70% of the time."

A new study has raised additional concerns about the use of brain imaging in neuroscience. The study was announced in a Duke University press release entitled "Studies of Brain Activity Aren't as Useful as Scientists Thought." The study includes a meta-analysis looking at how reliably the same region of higher brain activation shows up when a particular subject has his brain scanned at two different times.

What neuroscientists would like is a tendency to get the same result in two scans of a person's brain taken on different days, while the person is engaged in the same activity or exposed to the same stimulus. But that doesn't happen. We read the following in the press release, which quotes Ahmad R. Hariri:

"Hariri said the researchers recognized that 'the correlation between one scan and a second is not even fair, it’s poor.'...For six out of seven measures of brain function, the correlation between tests taken about four months apart with the same person was weak....Again, they found poor correlation from one test to the next in an individual. The bottom line is that task-based fMRI in its current form can’t tell you what an individual’s brain activation will look like from one test to the next, Hariri said....'We can’t continue with the same old "hot spot" research,' Hariri said. 'We could scan the same 1,300 undergrads again and we wouldn’t see the same patterns for each of them.'"

The press release is talking about a scientific study by Hariri and others that can be read here.  The study is entitled, "What is the test-retest reliability of common task-fMRI measures? New empirical evidence and a meta-analysis." The study says, "We present converging evidence demonstrating poor reliability of task-fMRI measures...A meta-analysis of 90 experiments (N=1,008) revealed poor overall reliability."

In a neuroscience study, the sample size is the number of subjects (animal or human) tested. Figure 1 of the Hariri study deserves careful attention. It has three graphs comparing the kind of sample sizes we would need to get reliable results in brain imaging studies (ranging from 100 to 1000) to the median sample size of brain imaging studies (listed as only 25). This highlights a problem I have written about many times: the sample sizes used in neuroscience studies are typically far too small to produce reliable results. As it happens, the problem is even worse than depicted in Figure 1, because the median sample size of a neuroscience study is actually much less than 25. According to the paper here, "Highly cited experimental and clinical fMRI studies had similar median sample sizes (medians in single group studies: 12 and 14.5; median group sizes in multiple group studies: 11 and 12.5)."
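A rough power calculation shows just how weak such sample sizes are. The sketch below uses a standard normal approximation for a two-group comparison; the "medium" effect size of 0.5 is a conventional benchmark, not a figure taken from any of the studies discussed:

```python
import math

# Approximate statistical power of a two-sided, two-sample comparison,
# using the normal approximation (good enough for a rough illustration).

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(effect_size, n_per_group, z_crit=1.96):
    """Power to detect `effect_size` with `n_per_group` subjects per group
    at the conventional two-sided alpha of 0.05 (z_crit = 1.96)."""
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return normal_cdf(noncentrality - z_crit)

# A "medium" effect (d = 0.5) at the median fMRI sample size of about 12:
print(f"n=12 per group: power ~ {approx_power(0.5, 12):.2f}")  # far below the 0.8 convention
print(f"n=64 per group: power ~ {approx_power(0.5, 64):.2f}")  # roughly what 80% power needs
```

With 12 subjects per group, the chance of detecting a genuine medium-sized effect is only around one in four; reaching the conventional 80% power standard requires on the order of 64 subjects per group, several times the median sample size of the fMRI literature.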

Neuroscientists have known about this shortcoming for years. It has been pointed out many times that the sample sizes used in neuroscience studies are typically far too small for reliable results. But our neuroscientists keep grinding out countless studies with too little statistical power. In the prevailing culture of academia, you are rewarded for the number of papers published with your name on them, and not much attention is paid to the reliability of such studies. So if you are a professor with a budget sufficient to fund either one relatively reliable study of 100 fMRI scans on 100 subjects, or 10 little low-reliability studies with only 10 subjects each, the prevailing rewards system in academia makes it a better career move to do the 10 unreliable studies resulting in 10 separate papers rather than the single study resulting in a single paper.

Figure 5 of the Hariri study is also interesting. It rates reliability in various tests of mental activity while subjects had their brains scanned at two different times.  There's data for a single task involving memory, which failed to reach a reliability of either "excellent" or "good."  This task involved a retest of only 20 different subjects. On the left of the figure, we have results for an Executive Function (EF)  test tried twice on 45 subjects, and a "relational" test tried twice on 45 subjects. The relational test is discussed here.  In the test you have to look at some visual figures, and mentally discern whether the type of transformation (either shape or texture) that occurred in a first row of figures was the same transformation used in the second row of figures.

So we have here the interesting case of two thinking tasks applied to 45 subjects on two different days, while their brains were scanned. This makes a better-than-average test of whether some brain region should reliably be activated more strongly during thinking.

The result was actually a flop and a fail for the hypothesis that your brain produces thinking. In the Executive Function test (the third column of circles in the figure below), none of the 8 brain regions examined produced a greater activation with a reliability rated Excellent, Good, or even Fair. The same was true of the relational test (the fifth column of circles): none of the 8 brain regions examined reached even a Fair reliability. The figure is shown below:

Figure 5 of the Hariri study (link)

The brain regions used in the tests graphed above were not random brain regions, but were typically the regions thought most likely to produce a correlation.

Such results are quite consistent with the claim I have long made on this blog that the brain is not the source of the human mind, and is not the source of human thinking.

Friday, June 5, 2020

Global Workspace Theory Sure Isn't an Explanation for Consciousness

Neuroscientists have no credible explanations for the most important mental phenomena such as consciousness and memory. All that scientists have in this regard are some far-fetched speculations or weak theories that don't hold up to scrutiny.  Supposedly the two most popular theories of consciousness proposed by scientists are one theory called integrated information theory and another theory called global workspace theory. You can read here why integrated information theory does not work as a credible theory of consciousness.  Global workspace theory isn't any better.

The wikipedia article on global workspace theory starts out by explaining it this way:

"GWT can be explained in terms of a 'theater metaphor.' In the 'theater of consciousness'  a 'spotlight of selective attention' shines a bright spot on stage. The bright spot reveals the contents of consciousness, actors moving in and out, making speeches or interacting with each other. The audience is not lit up—it is in the dark (i.e., unconscious) watching the play. Behind the scenes, also in the dark, are the director (executive processes), stage hands, script writers, scene designers and the like. They shape the visible activities in the bright spot, but are themselves invisible."

As a causal explanation of why a brain might be able to produce understanding or consciousness, this is a complete failure: it does not refer to anything in the brain, but to a theater. At most it is a metaphor describing selective attention, but selective attention (or mental focus) is merely an aspect of understanding once it exists, not an explanation of consciousness or understanding. You can't spotlight your way to consciousness. Also, there is nothing in the brain that physically corresponds to a spotlight. When you're thinking about something, it is not at all true that some particular region of your brain lights up like an area under a spotlight, contrary to the misleading statements and misleading visuals often given on this subject. Actual signal strength differences (typically far less than 1%) are no greater than we would expect from random variations.

In an interview in Scientific American, Bernard Baars attempts to explain global workspace theory, but fails rather miserably to give a coherent explanation of how global workspace theory is anything like a theory explaining consciousness. He is asked by the interviewer, "What is global workspace theory?" What we then get from Baars is an answer that wanders all over the place for 11 paragraphs without giving an answer anyone will be able to grasp.

There is some mention of a swarm computing setup: "If you put a hundred crummy algorithms together and let them share hypotheses and vote on the most popular one, it turns out that very inadequate algorithms could jointly solve problems that no single one could solve." This is entirely irrelevant to any explanation of consciousness or understanding, because particular areas of the brain are not like little micro-processors running software code. There is nothing like software code running anywhere in the brain.

Baars' rambling and muddled answer to the question ends like this:

"Part IV of my latest book On Consciousness: Science & Subjectivity develops GW dynamics, suggesting that conscious experiences reflect a flexible 'binding and broadcasting' function in the brain, which is able to mobilize a large, distributed collection of specialized cortical networks and processes that are not conscious by themselves. Note that the 'broadcast' phase proposed by the theory should evoke widespread adaptation, for the same reason that a fire alarm should evoke widespread responding, because the specific needs for task-relevant responders cannot be completely known ahead of time. General alarms are interpreted according to local conditions. A brain-based GW interacts with an 'audience' of highly distributed, specialized knowledge sources, which interpret the global signal in terms of local knowledge (Baars, 1988). The global signal triggers reentrant signaling, resonance is the typical activity of the cortex."

Baars' scrambled 11-paragraph answer is a complete failure as an attempt to explain how a brain could produce consciousness or understanding. Electrical signals travel around in the brain, but there is nothing like a broadcast in the brain that could explain consciousness or understanding. And it's rather silly to be trying to use fire alarms as part of an attempt to explain consciousness or understanding.

To understand how impotent the idea of broadcasting is as an explanation of consciousness or understanding, let's consider the city I grew up in. When I was a boy, my city had several very high towers that broadcast TV and radio signals. Almost every house in the city had an old-fashioned TV that picked up the TV signals, and an old-fashioned radio that picked up the radio signals. But none of this huge amount of broadcasting and broadcast reception produced the slightest bit of consciousness in any of the antennas, television sets or radios. The idea of broadcasting is worthless for explaining consciousness.

We cannot at all explain consciousness by saying that it adds up from the activity of a bunch of networks that "are not conscious by themselves." There is no reason why the activity of a bunch of unconscious networks should add up to a conscious reality, any more than a house made of bricks should add up to a wooden house.

The reality in the brain is that there are billions of cells, each emitting electrical signals. A rough analogy might be a packed stadium with 80,000 people each making noise during a football game. But you still have a unified self and a unified stream of thought. There's not the slightest reason why that would emerge from the activity of billions of individual neurons, just as there's not the slightest reason why a single paragraph of speech would ever flow from the lips of 80,000 people in a stadium.

A broadcast is a stream of tokens that can give information to an agent capable of understanding who is listening to such a broadcast. But a broadcast does nothing to ever produce such an agent of understanding.  The flow of tokens during a broadcast is rather like the stream of bullets from a machine gun.  Thinking that you can broadcast your way to consciousness is as silly as thinking that you can machine-gun your way to consciousness.

Narrating a groundless achievement legend (something very common these days in academia), Baars makes these mostly false claims:

"Our individuality is a function of the cortex, which is now proven by brain studies to be 'the organ of consciousness.' Wilder Penfield discovered that in 1934 via open-brain surgeries in fully awake patients, who were able to talk with him and gesture."

The brain is an organ; the cortex is not an organ, but only a fraction of one. So calling the cortex "the organ of consciousness" is nonsense. There are no brain studies showing that the cortex produces consciousness. To the contrary, we know that after hemispherectomy operations, in which half of the cortex (and half of the brain) is removed and discarded to stop very bad seizures, people are just as conscious and just as intelligent as they were before. And we also know from the studies of people like the physician John Lorber that people have existed with very good consciousness and above-average intelligence, even though their brains and cortices were almost entirely destroyed by disease. Such medical case histories debunk claims that the cortex is the source of consciousness. Of course, an operation by Penfield in which people can talk and gesture during brain surgery does absolutely nothing to establish that the brain or the cortex is the source of consciousness. So it is wrong for Baars to cite such a thing as evidence that Penfield discovered that consciousness comes from the cortex.

Baars has tried to suggest that consciousness comes from a broadcasting of something from the cortex. But the cortex of the brain is actually an extremely bad broadcaster. Electrical signals in the cortex travel from one neuron to another with very low reliability. It has been estimated that the chance of an action potential traveling between two adjacent neurons in the cortex is below 50%, and may be as low as 10%. A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower." It's implausible to say that cortex cells that are such bad and unreliable information transmitters (such bad broadcasters) are somehow giving rise to consciousness through some kind of broadcasting effect.
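The unreliability compounds quickly across a chain of synapses. The toy calculation below uses the transmission probabilities quoted above (50% and 10%); the chain lengths are illustrative assumptions, and it deliberately ignores the redundancy of parallel connections in real circuits:

```python
# Toy calculation: if each synapse transmits with probability p, the
# chance a signal survives a chain of n successive synapses is p ** n.
# (Real circuits have redundant parallel connections; this sketch
# ignores them to show how fast single-path reliability collapses.)

def chain_transmission_probability(p_per_synapse, n_synapses):
    return p_per_synapse ** n_synapses

for p in (0.5, 0.1):                 # the 50% and 10% estimates quoted above
    for n in (5, 10):                # illustrative chain lengths
        prob = chain_transmission_probability(p, n)
        print(f"p = {p}, chain of {n} synapses: survival chance = {prob:.10f}")
```

At 50% per synapse, a signal has only about a 3% chance of surviving five synapses in a single path; at 10% per synapse, the chance of surviving ten synapses is one in ten billion. Whatever the brain's parallel wiring does, the individual "broadcast channels" are extraordinarily lossy.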

Baars has a book describing his ideas on this topic. But I see no reason why anyone should buy it, because nothing Baars states in his Scientific American interview should give us any confidence that he has any substantive explanation of how a brain could produce consciousness, thinking or understanding. When asked about the "hard problem of consciousness," he states there is no evidence for it, which makes no sense, and is like saying there is no evidence for the problem of the origin of language or the problem of the origin of life.