Saturday, January 18, 2020

"Particle Experiences" and Other Dubious Ideas of Panpsychism

The book Galileo's Error: Foundations for a New Science of Consciousness by philosopher Philip Goff contains quite a few misfires. The biggest is an error extremely common among today's philosophers: using the way-too-small term “problem of consciousness” in discussing current shortfalls in explaining the human mind.

What we actually have is an extremely large “problem of explaining human mental capabilities and human mental experiences” that is vastly larger than merely explaining consciousness. The problem includes all the following difficulties and many others:

  1. the problem of explaining how humans are able to have abstract ideas;
  2. the problem of explaining how humans are able to store learned information, despite the lack of any detailed theory as to how learned knowledge could ever be translated into neural states or synapse states;
  3. the problem of explaining how humans are able to reliably remember things for more than 50 years, despite extremely rapid protein turnover in synapses, which should prevent brain-based storage of memories for any period of time longer than a few weeks;
  4. the problem of how humans are able to instantly retrieve little accessed information, despite the lack of anything like an addressing system or an indexing system in the brain;
  5. the problem of how humans are able to produce great works of creativity and imagination;
  6. the problem of how humans are able to be conscious at all;
  7. the problem of why humans have such a large variety of paranormal psychic experiences and capabilities such as ESP capabilities that have been well-established by laboratory tests, and near-death experiences that are very common, often occurring when brain activity has shut down;
  8. the problem of how humans have such diverse skills and experiences as mathematical reasoning, moral insight, philosophical reasoning, and refined emotional and spiritual experiences;
  9. the problem of self-hood and personal identity: why it is that we always continue to have the experience of being the same person, rather than just experiencing a bundle of miscellaneous sensations;
  10. the problem of intention and will: how it is that a mind can will particular physical outcomes.

It is therefore a ridiculous oversimplification for philosophers to be raising a mere "problem of consciousness” that refers to only one of these problems, and to be speaking as if such a “problem of consciousness” is the only difficulty that needs to be tackled by a philosophy of mind. But that is exactly what Philip Goff does in his book. We have an indication of his failure to pay attention to the problems he should be addressing by the fact that (according to his index) he refers to memory on only two pages of his book, both of which say nothing of substance about human memory or the problems of explaining it. His index also contains no mention of insight, imagination, ideas, will, volition or abstract ideas. The book's sole mention of the problem of self-hood or the self is (according to the index) a single page referring to “self, as illusion.” The book's sole reference to paranormal phenomena is a non-substantive reference on a single page. Ignoring the vast evidence for psi abilities, near-death experiences and other paranormal phenomena (supremely relevant to the philosophy of mind) is one of the greatest errors of academic philosophers of the past fifty years.

Imagine a baseball manager who has a “philosophy of winning baseball games” that is simply “make contact with the ball.” If you had such a philosophy, you would be paying attention to only a very small fraction of what you need to be paying attention to in order to win baseball games. And any philosopher hoping to advance a credible philosophy of mind has to pay attention to problems vastly more varied than a mere “problem of consciousness” or problem of why some beings are aware.

Goff's philosophical approach is to try to sell the old idea of panpsychism. Panpsychism, an idea that has been around for a very long time, holds that consciousness is in everything, or that consciousness is an intrinsic property of matter. A panpsychist may argue that just as mass is an intrinsic property of matter, consciousness is an intrinsic property of matter.

As shown by psychology textbooks that may run to 500 pages, the human mind (including memory) is an incredibly diverse and complicated thing, consisting of a huge number of capabilities and aspects. It has always been quite an error when people try to describe so complicated a thing as something simple and one-dimensional.  This is what panpsychists have always done when they try to reduce the mind to the word "consciousness," which they then describe as a "property." A property is a simple aspect of something that can be described by a single number (for example, weight is a property of matter, and length is a property of matter, both of which can be stated as a single number).  A mind is something vastly more complicated than a property.  

Goff commits this same simplistic error by trying to shrink the human mind to the word "consciousness" throughout his book, telling us on page 23 that consciousness is a "feature of the physical world," and telling us on page 113 that "consciousness is a fundamental and ubiquitous feature of physical reality." When I look up "feature," I find that it is defined to mean the same thing as "property": "a distinctive attribute or aspect of something."  Human minds are vastly more complicated than any mere "feature" or "property" or "aspect" or "attribute."  We are being fed simplistic pablum when we are told that our minds are some "feature" or "aspect" or "property." If you've started out with the vast diversity and extremely multifaceted richness of the human mind, and somehow ended up with a one-dimensional word such as "feature" or "aspect" or "property," you've gone seriously wrong somewhere. Call it a shrinkage snafu.

So many professors act like masters of concealment, misrepresenting in so many ways the gigantic mental and biological complexity of human beings, as if they were intent on covering up our complexities. And so we have utterly misleading cell diagrams in our biology textbooks, which make it look as if there are only a few organelles per cell (the paper here tells us that there are typically hundreds or thousands of organelles per cell). And so we have "cell types" diagrams, which make it look as if there are only a few types of cells (the human body actually has hundreds of types of cells). And so we have the myth that DNA is a blueprint or a recipe for making humans, false not only because of the lack of any such human specification in DNA, but also because of the naive error of speaking as if you could ever build an ever-changing, supremely dynamic organism like a human (as internally dynamic as a very busy factory) through some mere recipe or blueprint of the kind you would use to construct a static house or a static dish of food. And so we have the complexity-concealing claim that the vastly organized systemic arrangements of the human body can be explained by the "stuff piles up" idea of the accumulation of mutations (as if something as complex as a city could be explained by the kind of thing we use to explain snow drifts). And so we have the frequent reality-denying assertions that mentally humans are "just another primate" or that other mammals are "just like us." And so we have the great complexity concealment of speaking as if a human mind were mere awareness or consciousness that could be described as a "property" or "feature."

Panpsychism creates the problem that we then have to end up believing that all kinds of inanimate things are conscious to some degree. If consciousness were some intrinsic property of matter, it would seem to follow that the more matter, the greater the consciousness. So we would have to believe that the large rocks in New York City's Central Park are far more conscious than we are. And we would also have to believe that the Moon is vastly more conscious than we are. But if such inanimate things are far more conscious than we are, why do they not give us the slightest indication that they are conscious? There is no sign of any intelligent motion in the comets or asteroids that travel through space. Instead they seem to operate according to purely physical principles, exactly as if they had no consciousness whatsoever. That's why astronomers can predict very exactly how closely an asteroid will pass by our planet, and the exact day on which it will do so. So it seems that Goff's claim on page 116 that panpsychism is “entirely consistent with the facts of empirical science” is not actually true. To the contrary, we see zero signs of any consciousness or will in any non-biological thing, no matter how great its size, contrary to what we would expect under the theory of panpsychism.

No sign of any Mind here (credit: NASA)

On page 113 Goff suggests that maybe it is just certain arrangements of matter that might be conscious.  Goff isn't being terribly clear when he tells us on page 113, "Most panpsychists will deny that your socks are conscious, while asserting that they are ultimately composed of things that are conscious." So what does that mean, that the threads of your socks are conscious? If a panpsychist tries to defend his beliefs by denying that all material things are conscious, this actually pulls the legs out from under the table of panpsychism, depriving it of whatever small explanatory value it might have.  Once you go from "all matter is conscious" to "only certain arrangements of matter are conscious," you are left with the same problem that materialism has: no one can see any reason why consciousness would appear from some particular arrangement of matter.

It would seem that the panpsychist faces a kind of dilemma: either maintain that consciousness is an intrinsic property of matter (leaving you perhaps with some very small explanatory power, but many absurd consequences, such as large rocks being more conscious than humans), or maintain that only special arrangements of matter are conscious (which would seem to remove any explanatory reason for believing in panpsychism in the first place).

On pages 150 to 153 Goff shows himself to be an uncritical consumer of one of the biggest legends of neuroscience: the claim that split-brain patients have a dual consciousness. They have no such thing, as we can discover by watching YouTube interviews with split-brain patients, who clearly have a single self. A scientific study published in 2017 set the record straight on split-brain patients. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.” The actual facts about split-brain surgery are related here by a surgeon who has performed such an operation. He states this about split-brain patients:

"After the surgery they are unaffected in everyday life, except for the diminished seizures. They are one person after the surgery, as they were before."

Panpsychism does very little to help with the explanatory problems in the philosophy of mind. The main reason is that it helps with at most one of the ten problems listed at the beginning of this post. For example, panpsychism is worthless in explaining how humans are able to instantly retrieve memories, or why humans are able to form abstract ideas.

In the last paragraph of the book, Goff makes a pitch that kind of follows that classic salesman's advice to “sell the sizzle not the steak.” He states the following (imagine some violins playing as you read this passage):

“Panpsychism offers a way of 're-enchanting the universe.' On the panpsychist view, the universe is like us; we belong in it. We need not live exclusively in the human realm, ever more diluted by globalization and consumerist capitalism. We can live in nature, in the universe. We can let go of nation and tribe, happy in the knowledge that there is a universe that welcomes us.”

But I fail to see any reason why a belief in panpsychism would produce any good change in human behavior. I can also imagine it having a bad effect. If you believe that all matter is conscious, you might have no particular guilt about killing someone. You might think to yourself: “He will still be conscious, even if I kill him, because all matter is conscious.” Similarly, if you believe that all matter is conscious, you might think it would be no great tragedy if all humanity were to become extinct, on the grounds that this would produce only a slight reduction in the total consciousness that exists in the universe (humanity having less than .0000000000000000000000000000000000001 of the universe's matter).

When panpsychists use simplistic shrinkage to describe mind as a mere "property" or "feature," it is like someone telling you that New York City is just a geographical coordinate, or like someone telling you that Brazil is just a pair of sounds someone can make with his mouth. 

Scientific American has an interview with Goff about his book. Goff states the following:

"The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain’s most basic parts."

We can try to imagine such a whimsical possibility. A quark might have an experience of a dull, static existence stuck inside an atomic nucleus. An electron might have an experience of constantly whizzing around a nucleus at incredible speeds, like some person stuck on an amusement park ride. Or a neuron might have an experience of just sitting there motionless inside a brain.  If there were billions or trillions or quadrillions of such tiny micro-experiences, they would never add up to anything like the experience of being a mobile thinking human free to walk around anywhere he wishes. 

Saturday, December 7, 2019

The Guy with the Smallest Brain Had the Highest IQ

According to the theory that your brain creates your mind and stores your memories, we should expect removal of half of the brain to have a most drastic effect on memory and intelligence. But at the link here and the link here you can read about many cases showing a good preservation of memory and intelligence even after half a brain was removed to treat epileptic seizures.

There is a new study relating to the topic of intelligence and removal of half of the brain.  Once again, the study reports facts shockingly inconsistent with standard claims that the brain is the source of the human mind. But the press reporting on this study is feeding us a kind of "cover story" trying to explain away the shocking result.  Upon close inspection, this "cover story" falls apart. 

The study involved brain scans of six patients who had half of their brains removed.  Table S3 of the supplemental information of the study reveals that the intelligence quotients (IQ scores) of the six subjects were 84, 95, 91, 99, 96, and 80. So most of the six were fairly smart, even though half of their brains were gone.  How could this be when half of their brains were missing?

In stories such as the story in Discover magazine, it is suggested that "brain rewiring" can explain such a thing. The story states the following:

"In a study published Tuesday in Cell Reports, scientists studied six of these patients to see how the human brain rewires itself to adapt after major surgery. After performing brain scans on the patients, the researchers found that the remaining hemisphere formed even stronger connections between different brain networks — regions that control things like walking, talking and memory —  than in healthy control subjects. And the researchers suggest that these connections enable the brain, essentially, to function as if it were still whole."

The summary above is not accurate, as it tells a story that is not true for one of the six patients, as I will explain below. This hard-to-swallow story (repeated by the New York Times) is reassuring if you wish to keep believing that the brain is the source of your mind.  The person who buys such a story can reassure himself kind of like this:

"How do people stay smart when you take out half of their brain? It's simple: the brain just rewires itself so that the half works as good as a whole. It acts kind of like a computer that reprograms itself to keep functioning like normal when you yank out half of its components."

We know of no machines ever built that have such a capability.  All brains engage in some "brain rewiring" every year, so any mental effect can always be attributed to "brain rewiring." We cannot dream of how a brain could possibly be clever enough to rewire itself to perform just as well when half of its matter was removed.   When we take a close look at the data in the study, it shows that this "brain rewiring" story does not hold up for the smartest subject in the study. 

In Table S4 of the study, we have measurements based on brain scanning, designed to show the level of connectivity in the brains of the six subjects.  Some of the six subjects have a slightly higher average connectivity score, but it's not very much higher.  The average connectivity scores for the controls with normal brains were .30 and .35.  The average connectivity scores for the six patients with half a brain were .43, .45, .35, .30, .43, and .41.  So it was merely true that the average brain connectivity score of the patients with half a brain was slightly higher than that of the normal controls.  And when we look at another metric (the "max" score listed at the end of Table S4), we see that all of the half-brain subjects had lower "brain connectivity" scores than the controls.  The "max" connectivity scores for the controls with normal brains were .90 and .74, but the "max" connectivity scores for the six patients with half a brain were only .57, .67, .49, .51, .63, and .62.  So the evidence for greater brain connectivity or "nicely rewired brains" after removal of half a brain is actually quite thin.
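These Table S4 numbers, as quoted above, can be double-checked with a few lines of arithmetic:

```python
# Average whole-brain connectivity scores quoted above from Table S4
controls_avg = [0.30, 0.35]
patients_avg = [0.43, 0.45, 0.35, 0.30, 0.43, 0.41]
print(round(sum(patients_avg) / len(patients_avg), 3))  # 0.395: only modestly above the controls

# "Max" connectivity scores listed at the end of Table S4
controls_max = [0.90, 0.74]
patients_max = [0.57, 0.67, 0.49, 0.51, 0.63, 0.62]
print(max(patients_max) < min(controls_max))  # True: every half-brain "max" score is below both controls
```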

Interestingly, the half-brain patient with the highest intelligence (labeled as HS4, with an IQ of 99) had an average brain connectivity score of only .30, which is the same as one of the groups of controls with normal brains, and less than the brain connectivity of the other group of controls with normal brains.   So the smartest person with half a brain did not have any greater brain connectivity that could explain his normal intelligence with only half a brain.  How can this subject HS4 have had a normal intelligence with only half a brain?  In this case, favorable brain rewiring or greater brain connectivity cannot explain the result.   So the "cover story" of "their brains rewired to keep them smart" falls apart.

The half brain of subject HS4, IQ of 99, average brain wiring

The only way we can explain such results is by postulating that the human brain is not actually the source of the human mind.  If the human brain is neither the source of the human mind nor the storage place of memories, we should not find any of the results mentioned in this post to be surprising. 

Subject HS4 is not by any means the most remarkable case of a patient with half a brain and a good mind. The study here is entitled "Development of above normal language and intelligence 21 years after left hemispherectomy."  After they removed the part of the brain claimed to be the "center of language," a subject developed "above normal" language and intelligence. 

Then there is the case of Alex who did not start speaking until the left half of his brain was removed. A scientific paper describing the case says that Alex “failed to develop speech throughout early boyhood.” He could apparently say only one word (“mumma”) before his operation to cure epilepsy seizures. But then following a hemispherectomy (also called a hemidecortication) in which half of his brain was removed at age 8.5, “and withdrawal of anticonvulsants when he was more than 9 years old, Alex suddenly began to acquire speech.” We are told, “His most recent scores on tests of receptive and expressive language place him at an age equivalent of 8–10 years,” and that by age 10 he could “converse with copious and appropriate speech, involving some fairly long words.” Astonishingly, the boy who could not speak with a full brain could speak well after half of his brain was removed. The half of the brain removed was the left half – the very half that scientists tell us is the half that has more to do with language than the right half. 

What is also interesting in the new study is that when we cross-compare Figure 1 with Table S3 (in the supplemental information), we find that the patient with the largest brain (after the hemispherectomy operation) had the lowest IQ, and that the patient with the smallest brain had the highest IQ.  In Figure 1 the brain of the subject with an IQ of 80 (subject HS6) looks much larger than the brain of the subject with an IQ of 99 (subject HS4).  Such a result is not surprising under the hypothesis that your brain is not the source of your mind.  It should also not be surprising to anyone who considers the fact that the brain of the Neanderthals (presumably not as smart as we modern humans) was substantially larger than the brain of modern humans.

Saturday, November 9, 2019

The Lack of Evidence for Memory-Storage Engram Cells

There are some very good reasons for thinking that long-term memories cannot be stored in brains, which include:
  • the impossibility of credibly explaining how the instantaneous recall of some obscure and rarely accessed piece of information could occur as a neural effect, in a brain that is without any indexing system and subject to a variety of severe signal slowing effects;
  • the impossibility of explaining how reliable accurate recall could occur in a brain subject to many types of severe noise effects;
  • the short lifetimes of proteins in synapses, the place where scientists most often claim our memories are stored;
  • the lack of any credible theory explaining how memories could be translated into neural states;
  • the complete failure to ever find any brain cells containing any encoded information in neurons or synapses other than the genetic information in DNA;
  • the lack of any known read or write mechanism in a brain.
But scientists occasionally produce research papers trying to persuade us that memories are stored in a brain, in cells that are called "engram cells." In this post, I will discuss why such papers are not good examples of experimental science, and do not provide any real evidence that a memory was stored in a brain. I will discuss seven problems that we often see in such science papers. The "sins" I refer to are merely methodological sins rather than moral sins. 

Sin #1: assuming or acting as if a memory is stored in some exact speck-sized spot of a brain without any adequate basis for such a “shot in the dark” assumption.

Scientists never have a good basis for believing that a particular memory is stored in some exact tiny spot of the brain. But a memory experiment will often involve some assumption that a memory is stored in one exact spot of the brain (such as some exact spot of a cubic millimeter in width). For example, an experimental study may reach some conclusion (based on inadequate evidence) about a memory being stored in some exact tiny spot of the brain, and then attempt to reactivate that memory by electrically or optogenetically stimulating that exact tiny spot.

The type of reasoning used to justify such a “shot in the dark” assumption is invariably dubious. For example, an experiment may observe parts of the brain of an animal that is acquiring some memory, and look for some area that is “preferentially activated.” But such a technique is as unreliable as reading tea leaves. When brains are examined during learning activities, brain regions (outside of the visual cortex) do not actually show more than half of 1% signal variation. There is never any strong signal allowing anyone to say with even a 25% likelihood that some exact tiny part of the brain is where a memory is stored. If a scientist picks some tiny spot of the brain based on “preferential activation” criteria, it is very likely that he has not picked the correct location of a memory, even under the assumption that memories are stored in brains. Series of brain scans do not show that some particular tiny spot of the brain tends to repeatedly activate to a greater degree when some particular memory is recalled.

Sin #2: Either a lack of a blinding protocol, or no detailed discussion of how an effective technique for blinding was achieved.

Randomization and blinding techniques are very important scientific techniques for avoiding experimenter bias. For example, what is called the “gold standard” in experimental drug studies is a type of study called a double-blind, randomized experiment. In such a study, neither the doctors or scientific staff handing out pills nor the subjects taking the pills know whether the pills are the medicine being tested or a placebo with no effect.

If similar randomization and blinding techniques are not used in a memory experiment, there will be a high chance of experimenter bias. For example, let's suppose a scientist looks for memory behavior effects in two groups of animals, the first being a control group having no stimulus designed to affect memory, and the second group having a stimulus designed to affect memory. If the scientist knows which group is which when analyzing the behavior of the animals, he will be more likely to judge the animal's behavior in a biased way, so that the desired result is recorded.

A memory experiment can be very carefully designed to achieve this blind randomization ideal that minimizes the chance of experimenter bias. But such a thing is usually not done in memory experiments purporting to show evidence of a brain storage of memories. Scientists working for drug trials are very good about carefully designing experiments to meet the ideal of blind randomization, because they know the FDA will review their work very carefully, rejecting the drug for approval if the best experimental techniques were not used. But neuroscientists have no such incentive for experimental rigor.

Even in studies where some mention is made of a blinding protocol, there is very rarely any discussion of how an effective protocol was achieved. When dealing with small groups of animals, it is all too easy for a blinding protocol to be ineffective and worthless. For example, let us suppose there is one group of 10 mice that have something done to their brains, and another control group that has nothing done to their brains. Both may be subjected to a stimulus, and their “freezing behavior” may be judged. The scientists judging such a thing may be supposedly “blind” to which experimental group is being tested. But if a scientist is able to recognize any physical characteristic of one of the mice, he may actually know which group the mouse belongs to. So it is very easy for a supposed blinding protocol to be ineffective and worthless. What is needed to have confidence in such studies is not a mere mention of a blinding protocol, but a detailed discussion of exactly how an effective blinding protocol was achieved. We almost never get such a thing in memory experiments. The minority of papers that refer to a blinding protocol almost never discuss in detail how an effective blinding protocol was achieved, one that really prevented scientists from knowing something that might have biased their judgments.

For an experiment that judges "freezing behavior" in rodents, an effective blinding protocol would be one in which such freezing was judged by a person who never previously saw the rodents being tested. Such a protocol would guarantee that there would be no recognition of whether the animals were in an experimental group or a control group. But in "memory engram" papers we never read that such a thing was done.  To achieve an effective blinding protocol, it is not enough to use automated software for judging freezing, for such software can achieve biased results if it is run by an experimenter who knows whether or not an animal was in a control group. 

Sin #3: inadequate sample sizes, and a failure to do a sample size calculation to determine how large a sample size to test with.

Under ideal practice, as part of designing an experiment a scientist is supposed to perform what is called a sample size calculation. This is a calculation that is supposed to show how many subjects to use per study group to provide adequate evidence for the hypothesis being tested. Sample size calculations are included in rigorous experiments such as experimental drug trials.

The PLOS paper here reported that only one of the 410 memory-related neuroscience papers it studied had such a calculation. The PLOS paper reported that in order to achieve a moderately convincing statistical power of .80, an experiment typically needs to have 15 animals per group; but only 12% of the experiments had that many animals per group. Referring to statistical power (a measure of how likely a result is to be real and not a false alarm), the PLOS paper states, “no correlation was observed between textual descriptions of results and power.” In plain English, that means that there's a whole lot of BS flying around when scientists describe their memory experiments, and that countless cases of very weak evidence have been described by scientists as if they were strong evidence.

The paper above seems to suggest that 15 animals per study group are needed.  But in her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky PhD calculates (using Ioannidis's data) that the median effect size of neuroscience studies is about .51. She then states the following, talking about statistical power:

"To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group."

So the number of animals per study group needed for a moderately convincing result (one with a statistical power of .80) is something like 15 according to one source, and something like 60 according to another.  But the vast majority of "memory engram" papers do not even use 15 animals per study group.
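As a rough check on figures like these, the standard normal-approximation formula for a two-group comparison, n = 2((z_{1-α/2} + z_power)/d)² per group, can be computed with a short script. This is a sketch only; exact t-distribution methods give somewhat larger numbers at small sample sizes, which is one reason the quoted figures differ slightly:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, power, alpha=0.05):
    """Normal-approximation sample size per group for a two-sample comparison."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# Using the median neuroscience effect size of d = 0.51 cited above:
print(n_per_group(0.51, 0.80))  # 61 -- in line with the "60 per group" figure for 0.8 power
print(n_per_group(0.51, 0.50))  # 30 -- close to the quoted n of 31 for 0.5 power
```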

Sin #4: a high occurrence of low statistical significance near the minimum of .05, along with a frequent hiding of such unimpressive results, burying them outside of the main text of a paper rather than placing them in the abstract of the paper.

Another measure of the robustness of a research finding is the statistical significance reported in the paper. Memory research papers often have a marginal statistical significance close to .05.

Nowadays you can publish a science paper claiming a discovery if you are able to report a statistical significance of only .05. But it has been argued by 72 experts that such a standard is way too loose, and that things should be changed so that a discovery can only be claimed if a statistical significance of .005 is reached, which is a level ten times harder to achieve.

It should be noted that it is a big misconception that when you have a result with a statistical significance (or P-value) of .05, this means there is a probability of only .05 that the result was a false alarm and that the null hypothesis is true. This paper calls such an idea “the most pervasive and pernicious of the many misconceptions about the P value.” 
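The gap between the two ideas can be shown with a back-of-the-envelope calculation. (The 10% prior and 30% power used below are illustrative assumptions for the sake of the sketch, not figures from any particular paper.)

```python
# Toy calculation: the chance that a p < .05 result is a false alarm
# depends on the prior odds of a true effect and on statistical power,
# not just on the .05 threshold. Illustrative assumptions: 10% of
# tested hypotheses are true, and each study has 30% power.
prior_true = 0.10
power = 0.30
alpha = 0.05

true_positives = prior_true * power             # true effects detected
false_positives = (1 - prior_true) * alpha      # null effects "detected"
false_discovery_rate = false_positives / (true_positives + false_positives)
print(f"Share of p < .05 'discoveries' that are false alarms: "
      f"{false_discovery_rate:.0%}")
```

Under these assumed numbers, fully 60% of the "p < .05" results are false alarms, not 5% — which is exactly the misconception the quoted paper warns against.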

When memory-related scientific papers report unimpressive results having a statistical significance such as only .03, they often make it hard for people to see this unimpressive number. An example is the recent paper “Artificially Enhancing and Suppressing Hippocampus-Mediated Memories.” Three of the four statistical significance levels reported were only .03, but this was not reported in the summary of the paper, and was buried in hard-to-find places in the text.

Sin #5: using presumptuous or loaded language in the paper, such as referring in the paper to the non-movement of an animal as “freezing” and referring to some supposedly "preferentially activated" cell as an "engram cell." 

Papers claiming to find evidence of memory engrams are often guilty of using presumptuous language that presupposes what they are attempting to prove. For example,  the non-movement of a rodent in an experiment is referred to by the loaded term "freezing," which suggests an animal freezing in fear, even though we have no idea whether the non-movement actually corresponds to fear.  Also, some cell that is guessed to be a site of memory storage (because of some alleged "preferential activation" that is typically no more than a fraction of 1 percent) is referred to repeatedly in the papers as an "engram cell,"  which means a memory-storage cell, even though nothing has been done to establish that the cell actually stores a memory. 

We can imagine a psychology study using similar loaded language.  The study might make hidden camera observations of people waiting at a bus stop.  Whenever the people made unpleasant expressions, such expressions would be labeled in the study as "homicidal thoughts."  The people who had slightly more of these unpleasant expressions would be categorized as "murderers."   The study might say, "We identified two murderers at the bus stop from their increased display of homicidal expressions." Of course, such ridiculously loaded, presumptuous language has no place in a scientific paper.  It is almost as bad for "memory engram" papers to be referring so casually to "engram cells" and "freezing" when neither fear nor memory storage at a specific cell has been demonstrated.  We can only wonder whether the authors of such papers were thinking something like, "If we use the phrase engram cells as much as we can, maybe people will believe we found some evidence for engram cells." 

Sin #6: failing to mention or test alternate explanations for the non-movement of an animal (called “freezing”), explanations that have nothing to do with memory recall.

A large fraction of all "memory engram" papers hinge on judgments that some rodent engaged in increased "freezing behavior," perhaps while some imagined "engram cells" were electrically or optogenetically stimulated. A science paper says that it is possible to induce freezing in rodents by stimulating a wide variety of regions. It says, "It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014).”

But we are not informed of such a reality in quite a few papers claiming to supply evidence for an engram. In such studies typically a rodent will be trained to fear some stimulus. Then some part of the rodent's brain will be stimulated when the stimulus is not present. If the rodent is nonmoving (described as "freezing") more often than a rodent whose brain is not being stimulated, this is hailed as evidence that the fearful memory is being recalled by stimulating some part of the brain.  But it is no such thing. For we have no idea whether the increased freezing or non-movement is being produced merely by the brain stimulation, without any fear memory, as so often occurs when different parts of the brain are stimulated.

If a scientist thinks that some tiny part of a brain stores a memory, there is an easy way to test whether there is something special about that part of the brain. The scientist could do the "stimulate cells and test fear" kind of test on multiple parts of the brain, only one of which was the area where the scientist thought the memory was stored. The results could then be compared, to see whether stimulating the imagined "engram cells" produced a higher level of freezing than stimulating other random cells in the brain. Such a test is rarely done.

Sin #7: a dependency on arbitrarily analyzed brain scans or an uncorroborated judgment of "freezing behavior" which is not a reliable way of measuring fear.

A crucial element of a typical "memory engram" science paper is a judgment of what degree of "freezing behavior" a rodent displayed. The papers typically equate non-movement with fear coming from recall of a painful stimulus. This doesn't make much sense. Many times in my life I have seen a house mouse that caused me or someone else to shriek, and I never once saw a mouse freeze. Instead, mice seem invariably to flee rather than to freeze. So what sense does it make to assume that the degree of non-movement ("freezing") of a rodent should be interpreted as a measurement of fear? Moreover, judgments of the degree of "freezing behavior" in mice are too subjective and unreliable.

Fear causes a sudden increase in heart rate in rodents, so measuring a rodent's heart rate is a simple and reliable way of corroborating a manual judgment that a rodent has engaged in increased "freezing behavior." A scientific study showed that heart rates of rodents dramatically shoot up instantly from 500 beats per minute to 700 beats per minute when the rodent is subjected to the fear-inducing stimuli of an air puff or a platform shaking. But rodent heart rate measurements seem to be never used in "memory engram" experiments. Why are the researchers relying on unreliable judgments of "freezing behavior" rather than a far-more-reliable measurement of heart rate, when determining whether fear is produced by recall? In this sense, it's as if the researchers wanted to follow a technique that would give them the highest chance of getting their papers published, rather than using a technique that would give them the most reliable answer as to whether a mouse is feeling fear. 


Another crucial element of many "memory engram" science papers is analysis of brain scans.  But there are 1001 ways to analyze the data from a particular brain scan.  Such flexibility almost allows a researcher to find whatever "preferential activation" result he is hoping to find.  

Page 68 of this paper discusses how brain scan analysis involves all kinds of arbitrary steps:

"The time series of voxel changes may be motion-corrected, coregistered, transformed to match a prototypical brain, resampled, detrended, normalized, smoothed, trimmed (temporally or spatially)...Furthermore, each of these steps can be done in a number of ways, each with many free parameters that experimenters set, often arbitrarily....The wholebrain analysis is often the first step in defining a region of interest in which the analyses may include exploration of time courses, voxelwise correlations, classification using support vector machines or other machine learning methods, across-subject correlations, and so on. Any one of these analyses requires making crucial decisions that determine the soundness of the conclusions."

The problem is that there is no standard way of doing such things. Each study arbitrarily uses some particular technique, and it is usually true that the results would have been much different if some other brain scan analysis technique had been used. 
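A small simulation illustrates the danger of this flexibility. As a simplifying assumption, each analysis pipeline is modeled as an independent test whose p-value is uniform on pure-noise data (real pipelines are correlated, which shrinks but does not remove the inflation), and the figure of 20 pipelines is hypothetical:

```python
# Simulation: if a researcher can choose among 20 analysis pipelines,
# the chance that at least one yields p < .05 on pure-noise data is high.
# Simplifying assumption: each pipeline's p-value is independent and
# uniform under the null hypothesis.
import random

random.seed(1)
trials, pipelines, hits = 100_000, 20, 0
for _ in range(trials):
    # Did any of the 20 pipelines produce a "significant" p-value?
    if any(random.random() < 0.05 for _ in range(pipelines)):
        hits += 1
print(f"Studies with at least one 'significant' pipeline: {hits / trials:.0%}")
# Analytically: 1 - 0.95**20, or roughly 64%
```

In other words, under these assumptions a researcher who quietly tries many analysis variants will stumble on a "significant" result in most studies of pure noise.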

Examples of Such Shortcomings

Let us look at a recent paper that claimed evidence for memory engrams. The paper stated, “Several studies have identified engram cells for different memories in many brain regions including the hippocampus (Liu et al., 2012; Ohkawa et al., 2015; Roy et al., 2016), amygdala (Han et al., 2009; Redondo et al., 2014), retrosplenial cortex (Cowansage et al., 2014), and prefrontal cortex (Kitamura et al., 2017).” But the close examination below will show that none of these studies are robust evidence for memory engrams in the brain. 

Let's take a look at some of these studies. The Kitamura study that claimed to have “identified engram cells” in the prefrontal cortex is the study “Engrams and circuits crucial for systems consolidation of a memory.” In Figure 1 (containing multiple graphs), we learn that the numbers of animals used in different study groups or experimental activities were 10, 10, 8, 10, 10, 12, 8, and 8, for an average of 9.5. In Figure 3 (also containing multiple subgraphs), we have even smaller numbers. The numbers of animals mentioned in that figure are 4, 4, 5, 5, 5, 10, 8, 5, 6, 5 and 5. None of these numbers are anything like what would be needed for a moderately convincing result, which would be a minimum of 15 animals per study group. So the study is very guilty of Sin #3. The study is also guilty of Sin #2, because no detailed description is given of an effective blinding protocol. The study is also guilty of Sin #4, because Figure 3 lists two statistical significance values of “< 0.05,” which is the least impressive result you can get published nowadays. Studies reaching a statistical significance of less than 0.01 will always report such a result as “< 0.01” rather than “< 0.05.” The study is also guilty of Sin #7, because it relies on judgments of freezing behavior of rodents, which were not corroborated by something such as heart rate measurements.

The Liu study that claimed to have “identified engram cells” in the hippocampus is the study “Optogenetic stimulation of a hippocampal engram activates fear memory recall.” We see in Figure 3 that inadequate sample sizes were used. The numbers of animals listed in that figure (during different parts of the experiments) are 12, 12, 12, 5, and 6, for an average of 9.4. That is not anything like what would be needed for a moderately convincing result, which would be a minimum of 15 animals per study group. So the study is guilty of Sin #3. The study is also guilty of Sin #7. The experiment relied crucially on judgments of fear produced by manual assessments of freezing behavior, which were not corroborated by any other technique such as heart-rate measurement. The study does not describe in detail any effective blinding protocol, so it is also guilty of Sin #2. The study is also guilty of Sin #6. The study involved stimulating certain cells in the brains of mice, using something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory is being recalled when the stimulation occurs.

The Ohkawa study that claimed to have “identified engram cells” in the hippocampus is the study “Artificial Association of Pre-stored Information to Generate a Qualitatively New Memory.” In Figure 3 we learn that the animal study groups had a size of about 10 or 12, and in Figure 4 we learn that the animal study groups used were as small as 6 or 8 animals. So the study is guilty of Sin #3. Because the paper used a “zap their brains and look for freezing” approach, without discussing or testing alternate explanations for freezing behavior having nothing to do with memory, the Ohkawa study is also guilty of Sin #6. Judgment of fear is crucial to the experimental results, and it was done purely by judging "freezing behavior," without measurement of heart rate. So the study is also guilty of Sin #7. This particular study has a few skimpy phrases which claim a blinding protocol was used: “Freezing counting experiments were conducted double blind to experimental group.” But no detailed discussion is made of how an effective blinding protocol was achieved, so the study is also guilty of Sin #2.

The Roy study that claimed to have “identified engram cells” in the hippocampus is the study "Memory retrieval by activating engram cells in mouse models of early Alzheimer’s disease." Looking at Figure 1, we see that the study groups used sometimes consisted of only 3 or 4 animals, which is a joke from any kind of statistical power standpoint. Looking at Figure 3, we see the same type of problem. The text mentions study groups of only "3 mice per group," "4 mice per group," "9 mice per group," and "10 mice per group." So the study is guilty of Sin #3. Although a blinding protocol is mentioned in the skimpiest language, no detailed discussion is made of how an effective blinding protocol was achieved, so the study is also guilty of Sin #2. Some of the results reported have a statistical significance of only "< .05," so the study is guilty of Sin #4.

The Han study (also available here) that claimed to have “identified engram cells” in the amygdala is the study "Selective Erasure of a Fear Memory." In Figure 1 we see that a larger-than-average sample size was used for two groups (17 and 24), but that a way-too-small sample size of only 4 was used for the corresponding control group. You need a sufficiently high number of animals in all study groups, including the control group, for a reliable result. The same figure tells us that in another experiment the number of animals in the study group was only 5 or 6, which is way too small. Figure 3 tells us that in other experiments only 8 or 9 mice were used, and Figure 4 tells us that in other experiments only 5 or 6 mice were used. So this paper is guilty of Sin #3. No mention is made in the paper of any blinding protocol, so this paper is guilty of Sin #2. Figure 4 refers to two results with a borderline statistical significance of only "< 0.05," so this paper is also guilty of Sin #4. The paper relies heavily on judgments of fear in rodents, but these were uncorroborated judgments based on "freezing behavior," without any measurement of heart rate to corroborate such judgments. So the paper is also guilty of Sin #7.

The Redondo study that claimed to have “identified engram cells” in the amygdala is the study "Bidirectional switch of the valence associated with a hippocampal contextual memory engram." We see 5 or 6 results reported with a borderline statistical significance of only "< 0.05," so this paper is guilty of Sin #4. No detailed description is given of how an effective blinding protocol was achieved, and only the skimpiest mention is made of blinding, so this paper is guilty of Sin #2. The study used only "freezing behavior" to try to measure fear, without corroborating such a thing by measuring heart rates. So the paper is also guilty of Sin #7. The study involved stimulating certain cells in the brains of mice, using something called optogenetic stimulation. The authors have assumed that when mice freeze after stimulation, this is a sign that they are recalling some fear memory stored in the part of the brain being stimulated. What the authors neglect to tell us is that stimulation of quite a few regions of a rodent brain will produce freezing behavior. So there is actually no reason for assuming that a fear memory is being recalled when the stimulation occurs, and the study is also guilty of Sin #6.

The Cowansage study that claimed to have “identified engram cells” in the retrosplenial cortex is the study "Direct Reactivation of a Coherent Neocortical Memory of Context." Figure 2 tells us that only 12 mice were used for one experiment. Figure 4 tells us that only 3 and 5 animals were used for other experiments. So this paper is guilty of Sin #3. No detailed description is given of how an effective blinding protocol was achieved, and only the skimpiest mention is made of blinding, so this paper is guilty of Sin #2. It's a paper using the same old "zap rodent brains and look for some freezing behavior" methodology, without explaining that such results can occur for reasons having nothing to do with memory recall. So the study is guilty of Sin #6. Some of the results reported have a statistical significance of only "< .05," so the study is guilty of Sin #4.

So I have examined each of the papers that were claimed as evidence for memory traces or engrams in the brain. Serious problems have been found in every one of them. Not a single one of the studies gave a detailed description of how an effective blinding protocol was executed. All of the studies were guilty of Sin #7. Not a single one of the studies claims to have followed some standardized method of brain scan analysis; wherever there are brain scans, we can say that the experimenters merely chose one of 1001 possible ways to analyze brain scan data. Not a single one of the studies corroborated "freezing behavior" judgments by measuring heart rates of rodents to determine whether the animals suddenly became afraid. But all of the studies had a dependency on either brain scanning, uncorroborated freezing behavior judgments, or both. The studies all used sample sizes far too low to get a reliable result (although one of them used a decent sample size to get part of its results).

The papers I have discussed are full of problems, and do not provide robust evidence for any storage of memories in animal brains. There is no robust evidence that memories are stored in the brains of any animal, and no robust evidence that any such thing as an "engram cell" exists. 

The latest press report of a "memory wonder" produced by scientists is a claim that scientists implanted memories in the brains of songbirds. For example, the Scientist magazine has an article entitled, "Researchers Implant Memories in Zebra Finch Brains." The relevant scientific study is hidden behind a paywall of Science magazine. But by reading the article, we can get enough information to have the strongest suspicion that the headline is a bogus brag. 

Of course, the scientists didn't actually implant musical notes into the brains of birds. Nothing of the sort could ever occur, because no one has the slightest idea of how learned or episodic information could ever be represented as neural states. The scientists merely delivered little bursts of energy into the brains of some birds. The scientists claimed that the birds who got shorter bursts of energy tended to sing shorter songs. "When these finches grew up, they sang adult courtship songs that corresponded to the duration of light they’d received," the story tells us. Of course, a mere "duration similarity" like that could rather easily occur by chance.

It is quite absurd to describe such a mere "duration similarity" as a memory implant. It was not at all true that the birds sang some melody that had been artificially implanted in their heads. The scientists in question have produced zero evidence that memories can be artificially implanted in animals. From an example like this, we get the impression that our science journalists will uncritically parrot any claim of success in brain experiments with memory, no matter how glaring the shortcomings of the relevant study are.

There is no robust evidence for engram cells, and those who have tried to present evidence for memory storage cells have never been able to articulate a coherent detailed theory about how human memory experiences or human learned knowledge could ever be translated into neural states or cell states.  The engram theorist is therefore like a person who claims there is evidence for a city floating way up in the sky, but who is unable to tell you how a city could be floating in the air. 

Saturday, September 21, 2019

The Dubious Dogma That Brains Make Decisions

Neuroscientists like to claim that thoughts and ideas come from your brain, that your memories are stored in your brain, and that when you remember you are retrieving information from your brain. I have discussed in other posts why such claims are not well founded in observations, and why there are strong reasons for rejecting or doubting all such claims. In this post I will discuss another dogmatic claim made about the brain: the claim that the brain is the source of human decisions. I will discuss eight reasons for thinking that this claim is no better founded than dogmatic claims about the brain being the storage place of memories or the source of human abstract thoughts.

Reason #1: Scientists have no understanding of how neurons could make a decision.

When they try to present low-level explanations for how a brain could do some of the things that they attribute to brains, our neuroscientists falter and fail. An example is their complete failure to credibly explain how memories could be encoded in neural states, how memories could be permanently stored in brains, or how memories could be instantly recalled by brains. Neuroscientists also cannot credibly explain how a person could make a decision when faced with multiple choices.

When I did a Google search for "what happens in the brain when a decision is made," I got a bunch of articles with confident-sounding titles. But reading those articles, I found mainly what sounded like bluffing, hype, promissory sounds, and the kind of talk someone uses to persuade you he understands something he doesn't actually understand (along with some references to brain scanning studies that aren't robust, for reasons discussed later in this post). At no point in these articles do we reach someone who makes us think, "This guy really understands how a brain could reach a decision."

Let us consider a simple example. Joe says to himself, “Today I can either go to the library or go to see a movie.” He then decides to go to the library, and then starts walking towards the library.

To explain this neurally, we would have to explain several different things:

Item 1: How Joe's brain could hold two different ideas, the idea about the possibility of going to the movie, and the idea of going to the library.
Item 2: The appearance in Joe's brain of a third idea, an idea that he will go today to the library.
Item 3: Some neural act that causes his muscles to move in a way corresponding to his idea about going to a library.

The first two of these things cannot be credibly explained through any low-level explanation involving neurons or synapses. See my post “No One Understands How a Brain Could Generate Ideas” for a discussion of the failure of neuroscientists to present any credible explanations of how brains could generate ideas. In that post, I cite some “expert answers” pages on which the experts address exactly the question of how a brain could generate ideas, and sound exactly as if they have no understanding of such a thing.

On one of the “expert answers” pages that I cite, we have this revealing answer:

“'How does the 'brain' forms new ideas?' is the wrong question. We don't actually know how the brain codes old ideas.”

That is correct, which means that neither Item 1 nor Item 2 in my list above can be explained neurally. Since we do not understand how a brain could either hold ideas or form new ideas, we do not have any understanding of how a brain could make a decision.

Reason #2: Hemispherectomy patients can still make decisions just fine.

Hemispherectomy is an operation done on patients with severe epileptic seizures. In a hemispherectomy operation, half of the brain is removed. I can find no studies that have specifically studied decision-making ability in hemispherectomy patients. However, I have cited here and here and here and here scientific papers that show results for intelligence tests taken “before” and “after” a hemispherectomy operation. Such papers show, surprisingly, that removing half of a brain has little effect on intelligence as measured in IQ tests.

Written IQ tests are typically tests of not just intelligence but also decision making ability. For example, the Wechsler IQ test is by far the most common one used by scientists, and it is a multiple-choice test. Every single time a person has to pencil in one of the little ovals in a multiple-choice test, he has to make a decision. So standard IQ tests are very much tests of not just intelligence but also decision-making ability (which may be considered an aspect of intelligence).

Since IQ tests done on hemispherectomy patients show little damage to IQ scores after removing half of a brain, we can only conclude that removing half of a brain has little or no effect on decision making ability. We would not expect such a thing to be true if your brain is what makes your decisions.

Reason #3: Some people who lost most of their brains could still make decisions normally.

Cases of removal of half of the brain by surgical hemispherectomy are not at all the most dramatic cases of brain damage known to us. There are cases of patients who lost almost all of their brains due to diseases such as hydrocephalus, a disease that converts brain tissue to a watery fluid. Many such cases were studied by the physician John Lorber. He found that most of his patients were actually of above-average intelligence. Similarly, a French person working as a civil servant was found in recent years to have almost no functional brain.

Such cases seem to show that you can lose more than 75% of your brain and still have a normal decision making ability. This argues against claims that your brain is what is making your decisions.

Reason #4: Split brain patients don't have their decision making harmed.

The two hemispheres of the brain are connected by a set of thick fibers called the corpus callosum. In rare operations this set of fibers is surgically severed. The result is what is called a split-brain patient. Despite the erroneous claims that are sometimes made about this topic, the fact is that such an operation absolutely does not result in anything like a split personality or a split consciousness or a split mind. Such an operation does not result in two minds causing conflicting decisions.

The scientific paper here (entitled "The Myth of Dual Consciousness in the Brain") sets the record straight, as did a scientific study published in 2017. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.” Their study (entitled "Split brain: divided perception but undivided consciousness") can be read here. “We have shown that severing the cortical connections between the two brain hemispheres does not seem to lead to two independent conscious agents within one brain,” the researchers said.

A 2014 article on split-brain patients stated the following:

“In general, split-brained patients behave in a coordinated, purposeful and consistent manner, despite the independent, parallel, usually different and occasionally conflicting processing of the same information from the environment by the two disconnected hemispheres...Often, split-brained patients are indistinguishable from normal adults.”

In the video here we see a split-brain patient who seems like a pretty normal person, not at all someone with “two minds." And at the beginning of the video here the same patient says that after such a split-brain operation “you don't notice it” and that you don't feel any different than you did before – hardly what someone would say if the operation had produced “two minds” in someone. And the video here about a person with a split brain from birth shows us what is clearly someone with one mind, not two. In these interviews, every single time the split-brain patients answer questions normally, they are showing their ability to make decisions normally. The mere act of answering questions always involves decisions about what to say and how to say it.

But this is not at all what we should expect from the assumption that the brain is the source of our decisions. If that assumption were true, a split-brain operation should cause two independent sources of decision-making that would have a tendency to conflict with each other.

Reason #5: “Decision zig-zag" is almost never observed in pressure situations, but we would expect it to be very common if different parts of the brain (or halves of the brain) were causing decisions. 

Here's a quick mental test I'd like you to try. If you can answer all the questions real quickly, in a small number of seconds, it will tend to show you're a smart person who can think fast.  Try it. 

1. Pick a color.
2. Pick a number between 1 and 10. 
3. Pick a planet.  
4. Pick a continent.
5. Pick a city.

Did you skip the test? No fair. It's easy -- go back and try it. 

Now, if you are like 90% of my readers, you were able to do this exercise real quickly, in less than 10 or 15 seconds. But we would not expect such a thing to be possible if your brain was making your decisions. For in that case, we would expect that different parts of the brain would be coughing up different decisions, leading to a result rather like this:

Pick a color? Uh, red -- no green - no blue - okay, red! 
Pick a number? Uh, 8! No, 6 ! No -- uh, 4!  No, 2!
Pick a planet? Merc -- no Jupiter -- no, Earth, no wait...
Pick a continent?  North -- no South -- no Eur -- no Afri -- no Asia!
Pick a city? New ... uh, no Shang... no Paris -- oops, no Moscow! 

As mentioned above, people who have half of their brains removed in hemispherectomy operations can make decisions normally. It therefore cannot be maintained that a decision requires a full brain. If you think that brains make decisions, you are forced to the idea that part of a brain (half a brain or less) can make a decision. But such an idea makes us ask: should not then people be overwhelmed by conflicting decision signals, sent by different parts of a brain? 

Consider the organization of the brain. There are two identical halves. Under the hypothesis that a half of a brain or less can make a decision, we would therefore expect to see very often something that we can call "decision zig-zag."  This would involve behavior in which an organism was flipping back and forth between two possible decisions, as if two physical areas of the brain were conflicting with each other, coming to separate decisions. We would expect to see this particularly often in "coin flip" kind of decisions in which one choice is not obviously better than another. 

But we rarely see such behavior in humans, whenever there is time pressure. It is true that given some important choice, and given the luxury of time to deliberate, a person may kind of go back-and-forth in his mind about what to do. For example, if you are accepted by two different colleges, you may kind of go back-and-forth in your mind, first favoring one choice, then another.  But whenever there is a tight time pressure, and people know there is only a very short time for a decision, humans typically behave with very little indecision. 

Scores on standardized tests such as the SAT are an excellent gauge of how very infrequently high-performing humans engage in "decision zig-zag" under pressure.  In the reading and writing part of an SAT test, a student has to answer more than 100 questions in less than two hours. The questions are multiple choice, so doing the test requires making more than 100 decisions, each a decision about which of the choices to select. Each question typically requires 30 seconds or more of reading.  There is very little time for indecision. Everyone who performs very well on the test (in the 90th percentile or higher) is making 100 or more decisions (about which answer to choose) with very little indecision.  Under such pressure, humans do not at all perform as they would if different halves or different parts of their brains were sending them different signals about what to do.  Humans instead act like beings with a single unified mind.  It would seem that if different parts or halves of a brain were determining what decision to make, there would be so much indecision and "decision zig-zag" that the average SAT score in the US would be at least 200 points lower than it is. 

Reason #6: There is no particular region of the brain that seems to be crucial to non-muscular decision making.

Some particular regions of the brain have been strongly associated with particular functions. For example, we know that the brain stem is strongly associated with autonomic activity that keeps the heart and lungs working. Any major damage to the brain stem usually causes death. We also know that the visual cortex is strongly associated with vision. But no strong associations have been established between any part of the brain and calm non-muscular decision making.  By "non-muscular decision making" I mean the type of thing that goes on when you silently pick a number between 1 and 10 or silently choose in the morning what you will eat for dinner.  

To get an idea of how weak the neuroscience case is that your brain makes decisions, we can look at an article in Psychology Today entitled “The Neuroscience of Making a Decision.” After referring to some brain region that might be involved in addiction, which has no general relevance to the issue of whether brains make decisions, we are referred to a study claiming that the striatum is involved in decision-making. It's a study that used only 7 rats. Since this is less than half of the minimum number of animals per study group recommended for a modestly convincing result, the study provides no good evidence for a neural involvement in decision making.

Then the Psychology Today article refers to a brain-scanning study attempting to show that regions called the dorsolateral prefrontal cortex and the ventromedial prefrontal cortex have something to do with decision making. These are the two regions that are most commonly cited as being involved in decision making. A brain scanning study could only give robust evidence for some region being heavily involved in some activity if it were to show a strong percent signal change, rather than the weak signal change of only 1% or less that brain scanning studies typically show. In this case, the study does not even give a figure for the percent signal change. So it does not provide any robust evidence that the dorsolateral prefrontal cortex or the ventromedial prefrontal cortex has something to do with decision making.

Our Psychology Today article then concludes, having provided no real evidence that there is any such thing as a “neuroscience of decision making.”

This study examined six patients with damage to the dorsolateral prefrontal cortex, and found that they had an average IQ of 104, above the average of 100. Since filling out a written IQ test requires many cases of decision making (in regard to the answer given), such a result is incompatible with claims that the dorsolateral prefrontal cortex is some part of the brain particularly involved in decision making. This study says, “We have studied numerous patients with bilateral lesions of the ventromedial prefrontal (VM) cortex” and that “most of these patients retain normal intellect, memory and problem-solving ability in laboratory settings.” The meta-analysis here says that the ventromedial prefrontal cortex is the region of the brain "most commonly implicated in moral decision making," but says that there is a "lack of a significant cluster of activation" in this area, meaning that it doesn't actually light up more during brain scans. 

Failing to report any actual figures for percent signal changes (the number we need to know to judge whether some area of the brain is more involved in an activity), the same meta-analysis notes differences between its findings and the findings of other studies, highlighting how much these brain scan studies tend to conflict with each other. We read the following:

"As previously stated, Bzdok et al. (2012) found a cluster of activation in the rTPJ (BA 39), which we did not find. Another discrepancy between our ME activation clusters and Bzdok et al.’s (2012) for moral cognition are that they found a cluster of activation in the left amygdala, which we did not find. Also, Bzdok et al. (2012) reported activation in the precuneus, which was not found to be a cluster of significant activation for the ME experiments in our analysis."

Another example of a report of a supposed “neuroscience of decision making” is a Neuroscience News article here entitled, “Researchers Discover Decision Making Center of Brain.” We again have a reference to a mere brain scanning study. But this time the study has a graph that gives us the percent signal change that we need to judge whether robust evidence has been found. The graph shows that the percent signal change picked up by the brain scanning is only about a fraction of one percent, about 1 part in 300. That's no good evidence for anything, and could easily be the result of pure chance fluctuations.

Similarly weak results are found in this study, which tried to use brain scanning to find some region of the brain more involved in decision making. The graph shows that the percent signal change picked up by the brain scanning is only a fraction of one percent, about 1 part in 300. That's no good evidence for anything, and could easily be the result of pure chance fluctuations. 

In Figure 3 of the study here, we get a brain scanning result for the percent signal change in activity for the dorsolateral prefrontal cortex. The graph shows a signal change of only about 1 part in 300 (about 0.3 percent). That's no good evidence for anything, and could easily be the result of pure chance fluctuations. 
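To see why a signal change of a few tenths of a percent is so easy to get by chance, consider a quick simulation. The baseline level, the noise level (about 1% of baseline), and the number of scans per condition are all illustrative assumptions of mine, not figures from any of the studies discussed:

```python
import random
import statistics

random.seed(0)

BASELINE = 100.0   # arbitrary baseline signal level
NOISE_SD = 1.0     # noise of about 1% of baseline, assumed for illustration
SCANS = 50         # scans per condition, also an illustrative assumption

def chance_percent_change():
    """Percent difference between two pure-noise condition averages."""
    decision = [BASELINE + random.gauss(0, NOISE_SD) for _ in range(SCANS)]
    rest = [BASELINE + random.gauss(0, NOISE_SD) for _ in range(SCANS)]
    return abs(statistics.mean(decision) - statistics.mean(rest)) / BASELINE * 100

# How often does pure noise produce a "signal change" of 0.3% or more?
trials = [chance_percent_change() for _ in range(2000)]
rate = sum(t >= 0.3 for t in trials) / len(trials)
print(f"fraction of pure-noise runs showing a 0.3% or greater change: {rate:.2f}")
```

Under these assumed numbers, pure noise produces a "0.3 percent signal change" in roughly one run out of every seven or eight, with no decision-making going on anywhere.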

Most of the studies that claim to show neural correlates of decision making are mainly finding either neural correlates of emotion (which can often be entangled with decision making) or neural correlates of muscle activation (often paired with decision making).  When I do a Google search for "neural correlates of motionless decision making," I am unable to find a single study testing such a thing. 

Reason #7: There is no convincing evidence of some type of change of brain state when a calm non-muscular decision is made.

By looking at brain scans, it is impossible to reliably predict when anyone made a non-muscular decision.  We should not be fooled by a certain type of brain scanning experiment with the following characteristics:

(1) The study will not be pre-registered, and will not publish in advance a specification of some particular type of brain activation signal that it is looking for (in some very specific little part of the brain) as a sign of when someone made a decision.
(2) The study will scan the brains of people as they make some decision in their minds. 
(3) Scientists will then examine the brain scans, looking for some particular tiny area of the brain that was a tiny bit more active when the decisions were made. 
(4) The study will involve only a small number of subjects, maybe 10, 15, 20 or 25. 

Let me explain why this type of study is not at all good evidence for anything.  In any brain there will be random fluctuations in activity from moment to moment.  Let us suppose a researcher has the freedom to compare any of 200 different little areas of the brain, looking for some area that has an increase in activity during some particular moment (such as when a decision is made). We would expect that purely by chance there would be some area showing a tiny bit more activity during the particular moment being studied, even if the brain is not making a decision. Similarly, if I use a machine that can detect minute fluctuations in temperature in the livers of 20 people while they are making a decision, and I have the freedom to check 200 different little regions of the liver, I will probably be able to find some tiny liver region which (purely by chance) had a minutely higher temperature when some decision was made. But this would do nothing to show that livers make decisions. 
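This "search many regions" problem can be illustrated with a short simulation. The numbers here (200 regions, 20 measurements per condition) are illustrative assumptions. Every region is generated as pure noise, so by construction no region "does" anything when the decision is made, yet the best-looking region still stands out:

```python
import random

random.seed(1)

REGIONS = 200   # little regions the researcher is free to inspect
SCANS = 20      # measurements per condition, an illustrative assumption

def mean(xs):
    return sum(xs) / len(xs)

# Every region is pure noise: nothing here responds to a "decision."
diffs = []
for _ in range(REGIONS):
    decision_scans = [random.gauss(0, 1.0) for _ in range(SCANS)]
    rest_scans = [random.gauss(0, 1.0) for _ in range(SCANS)]
    diffs.append(mean(decision_scans) - mean(rest_scans))

# Yet the most "activated" region stands out, purely by chance.
best = max(diffs)
print(f"average difference across all {REGIONS} regions: {mean(diffs):+.3f}")
print(f"largest single-region difference: {best:+.2f} noise standard deviations")
```

The average across all regions hovers near zero, as it should for pure noise, but the single best region typically shows an apparent "activation" of a full noise standard deviation or so. A researcher free to report only that region has found nothing at all.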

A discussion of this issue can be found around page 23 of the technical paper here, where we read the following:

"With a plausible population correlation of 0.5, a 1000-voxel whole-brain analysis would require 83 subjects to achieve 80% power. A sample size of 83 is five times greater than the average used in the studies we surveyed: collecting this much data in an fMRI experiment is an enormous expense that is not attempted by any except a few major collaborative networks."

In other words, brain imaging studies tend to use only a fraction of the sample size they need, given the techniques they typically use.  It is possible to do a reliable study with a small sample size, if you limit the analysis to only one small area of the brain. But that is almost never done.  
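The quoted power figures can be roughly reproduced with the standard Fisher z approximation for testing a correlation, applying a Bonferroni correction for a 1000-voxel whole-brain search. This is a sketch of the statistics involved, not the paper's exact calculation; the 17-subject figure is simply one-fifth of 83, matching the paper's statement about typical sample sizes:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p):
    """Inverse normal CDF by bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def correlation_power(r, n, alpha):
    """Approximate power to detect a true correlation r with n subjects,
    using a two-sided test at level alpha and the Fisher z approximation."""
    z_crit = phi_inv(1.0 - alpha / 2.0)
    z_r = math.atanh(r)                  # Fisher transform of the correlation
    return 1.0 - phi(z_crit - z_r * math.sqrt(n - 3))

# Bonferroni-corrected threshold for a 1000-voxel whole-brain search
alpha = 0.05 / 1000

print(f"power with 83 subjects: {correlation_power(0.5, 83, alpha):.2f}")
print(f"power with 17 subjects: {correlation_power(0.5, 17, alpha):.2f}")
```

With 83 subjects this approximation gives power of about 80 percent, matching the quote; with a typical sample a fifth that size, the power collapses to a few percent, meaning such a study would almost always miss even a genuinely strong effect.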

On page 33 the paper above states the following, giving us a strong reason for skepticism about brain scanning studies:

"In short, our exploration of power suggests that across-subject whole-brain correlation experiments are generally impractical: without adequate multiple comparisons correction they will have false positive rates approaching 100%, with adequate multiple comparisons correction they require 5 times as many subjects than what the typical lab currently utilizes."

The study here is an example of the type of unconvincing study I have just discussed. The authors scanned brains, looking for a change in signal strength corresponding to whether some type of decision was made.  Having the freedom to check any of 200 or more brain regions (since their study was not a pre-registered study announcing its intention to look in only one little place in the brain), the authors found one or two tiny regions with an extremely small greater activation when a decision was made. The difference in signal strength (as reported in Figure 1 and Figure 2) was only about 0.1 percent, which is about 1 part in 1000.  But we would expect a result as good as that by chance, because of random variations in little parts of the brain, even if brains do not actually make decisions. So the study does nothing at all to provide evidence that brains are making decisions.  The study says it did "whole brain analyses," but it used only 32 subjects, only a small fraction of the 83 subjects recommended above for a mere 1000-voxel "whole brain analysis" study.  

On page 68 of the book "Casting Light on the Dark Side of Brain Imaging," we read about another problem in brain scanning studies:

"Take a guess at how many ways we can analyze data from a single brain scan. Theoretically countless, practically at least 69,000 ways....Brain imaging data usually requires between 6 and 10 steps of general data preparation and analysis. Researchers can perform any of these steps in a variety of ways....Different choices in data processing and analysis can lead to widely divergent results: small variations can quickly sum to form large discrepancies. In some cases, researchers may run many variations of an analysis, but only report results that support their hypothesis. This practice can lead to biased publications that overestimate true effects." 

Here is the kind of thing we would like to have in order to have convincing evidence of greater brain activity when a non-muscular decision is made:

(1) There would have to be many replicated pre-registered studies that all showed that some particular region of the brain activated at a substantially higher level when a decision was made (more than just a fraction of 1 percent). 
(2) In the pre-registration declarations, published prior to the collection of any data, the study authors would have to announce that they were studying only one small region of the brain to see whether it activates more during decision making, rather than giving themselves the freedom to check any brain region they wanted in a "fishing expedition" kind of approach to produce signal variations we would expect by chance.  
(3) In the same  pre-registration declarations, published prior to the collection of any data, the study authors would have to commit to one exact method of data analysis, precisely spelled out, thereby depriving themselves of the freedom to keep "slicing and dicing" the brain scan data until they got a result supporting their hypothesis. 

Nothing like this has occurred.  Instead we have a succession of little brain scan studies (usually with low statistical power) showing minute less-than-one-percent activation increases in some region that differs from study to study, studies in which researchers are free to look for some minute signal deviation in any brain region, and free to try dozens of data analysis methods until something that can be called a neural correlate coughs up.  The results of such studies are what we would expect to get by chance even if brains are not actually making decisions. 

In short, we have no robust evidence that brains make decisions. Nature never told us that decisions are made by brains. It is merely neuroscientists who told us such a thing, without ever having adequate evidence for such a claim. 

Reason #8: Humans can make decisions many times more quickly than they could if decisions were being made by brains subject to several severe signal slowing factors and severe signal noise. 

Humans can make decisions very, very fast. Every time someone drives in the city, he is making important decisions very quickly, such as whether to brake at a particular moment. Every time someone speaks very quickly in conversation, he is making many instantaneous decisions about what words to use. People such as quarterbacks, soccer players, standardized test takers, chess players in special "speed" matches, and players of the Jeopardy TV show are making decisions at a very fast speed, often instantaneously. 

If you tried my previous selection game, you probably made decisions at a rate of about one decision per one or two seconds (each time you picked one of the possibilities, you were making a decision on what to pick).  If it took you 15 seconds to do that test, about two-thirds of that was reading and memory recall, and you were making decisions at a rate of about one decision per second.  A baseball hitter typically makes a decision (on whether to swing) in only a small fraction of a second. According to this scientific paper, "The average speech rate of adults in English is between 150 and 190 words per minute (Tauroza and Allison 1990), although in conversation this figure may rise considerably, reaching 200 wpm (Walker 2010; Laver 1994)."  Someone speaking in conversation at 200 words per minute is making more than three decisions per second, each a decision on which word to use.  

But we have strong reasons for believing that brains should not be fast enough for instantaneous decisions. The "100 meters per second" claim often made about brain signal speed is not at all accurate, as it completely ignores very serious slowing factors such as the 200-times-slower speed of transmission through dendrites, the very serious slowing factor caused by cumulative synaptic delays, and the additional very serious slowing factor caused by what is called synaptic fatigue. A realistic calculation of brain signal speed (such as I have made here) leads to the conclusion that brains should be far too slow to allow extremely rapid or instantaneous decisions, and that if a brain were making a non-muscular decision (not involving a reflex), it should take at least five or ten seconds for any such decision.  

We also know that the brain has multiple sources of very severe signal noise, as discussed here. It would seem that all this noise would be a huge factor preventing different parts of a brain from reaching a single decision quickly, just as it would greatly decrease the chance of a classroom of 30 people reaching the same decision very quickly if all of the people were blaring different podcasts, videos and rock songs from their smartphones. 

An intelligent hypothesis about human decision making is that it comes from an immaterial aspect of human beings, what is commonly called a soul or spirit.  A neuroscientist will protest that it is forbidden to postulate some important reality that we cannot directly see. Such a rule is not at all followed in general by scientists. Astrophysicists and cosmologists nowadays are constantly claiming that most of the universe consists of important realities we cannot see, what they call dark matter and dark energy -- both things that have never been directly observed by any method.