Wednesday, May 25, 2022

Seven Things in Fast Retrieval Systems, None of Which Your Brain Has

Humans manufacture various types of fast-retrieval systems, such as computers and books. A simple book with page numbers and an index is a fast-retrieval system, allowing you to get information about some topic very quickly.  Below are seven things typically found in fast-retrieval systems. 

Characteristic #1: Addressing or Position Notation

Addressing is some setup whereby particular spots in a system have addresses or coordinates. In a book, addressing is implemented as page numbers. Without page numbers, you could not use a book's index to find information quickly; the index only works because there are numbered pages it can refer to. Computers also use an addressing or position notation system. Every little spot on a computer's hard drive has an address or positional coordinate that can be used internally by the computer. 
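Here is a minimal sketch in Python (my own illustration, with made-up contents) of why an address makes retrieval fast: with a numbered position you jump straight to the item, and without one you must scan everything.

```python
# A minimal sketch of why addresses matter for fast retrieval.
# A Python list gives every stored item a numbered position, much as a book
# gives every page a page number. The contents below are made up.
pages = ["preface", "article on aardvarks", "article on Aztecs", "index"]

# With an address, retrieval is a direct jump: no searching required.
print(pages[2])              # -> "article on Aztecs"

# Without addresses, the only option is to scan everything until the item turns up.
for content in pages:
    if "Aztecs" in content:
        print(content)
        break
```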

Conversely, brains have no addressing system, no position notation system, and no coordinate system. Neurons don't have neuron numbers or neuron coordinates, and synapses don't have synapse numbers or synapse coordinates. 

Characteristic #2: Indexing 

Indexing is some setup that allows fast retrieval of information using addresses or coordinates or numbered positions. An index typically uses a sorted list. For example, the index of a book contains a sorted list of topics in the book, with a page number or page numbers next to each topic. Computers also use indexing, and online services such as Internet search engines rely heavily on databases built around indexes. 
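Here is a minimal sketch in Python (my own illustration; the topics and page numbers are invented) of what an index does: it maps each topic to the addresses where that topic can be found. A printed index gets its speed by keeping the topics sorted; Python's dictionary gets the same effect by hashing.

```python
# A book-style index: each topic points to page numbers (addresses).
# The entries are invented for illustration.
index = {
    "aardvarks": [12],
    "aztecs": [310, 415],
    "zebras": [1045],
}

# Fast retrieval: the index hands back addresses, and the addresses locate the pages.
print(index["aztecs"])       # -> [310, 415]
```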

Conversely, there are no indexes in the brain. 

Characteristic #3: Sorting

Sorting is used in indexes, but sorting can also be used by itself to allow fast retrieval of information. When I was a boy, long before the Internet existed, a key resource I used was a multi-volume encyclopedia set such as the Encyclopaedia Britannica. The set consisted of many volumes, with the first volume covering topics beginning with A, and the last volume covering topics beginning with Z. Each volume was alphabetically sorted. So, for example, in the A volume the article on aardvarks came near the beginning, and the article on the Aztecs came near the end. With such a sorted arrangement of topics, it was easy to quickly find information on almost any topic. Computers also make use of sorting to allow quick retrieval of information. 
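A minimal sketch in Python (my own illustration, with invented article titles) of how sorting makes lookup fast: binary search on a sorted list needs only a handful of comparisons, just as you flip quickly to the right place in an alphabetized volume.

```python
import bisect

# Binary search over a sorted list of article titles (titles invented).
articles = sorted(["aardvark", "aztec", "beetle", "gravity", "zebra"])

def find(title):
    i = bisect.bisect_left(articles, title)   # binary search: about log2(n) comparisons
    return i < len(articles) and articles[i] == title

print(find("aztec"))     # True, found after just a few comparisons
print(find("unicorn"))   # False

# With a million sorted titles, binary search needs about 20 comparisons;
# an unsorted pile could need up to a million.
```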

Conversely, there is no sorting going on in the brain. Neurons and synapses have fixed positions in the brain. There is no way for a brain to sort its neurons or synapses, and no sign that any brain components are sorted. 

Characteristic #4: A Nondestructive Position Focus Mechanism

A position focus mechanism is some mechanism that allows information to be read from a current reading position, a position that can be changed. The setup of a book (with a binding and many pages) allows a nondestructive position focus. You simply open the book to one of its pages, and that is the current reading position. Computers with hard drives also use a position focus system. They have a read/write head that can be moved to a spot on a spinning disk, and that spot is the current reading position. A good position focus mechanism is one that is nondestructive, allowing you to move to any reading position without damaging information in the system. 
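A minimal sketch in Python (my own illustration; the file name and contents are hypothetical) of a nondestructive position focus mechanism: a file object keeps a current reading position that can be moved with seek(), and reading never disturbs what is stored.

```python
# Write a small file, then read from arbitrary positions without altering it.
with open("encyclopedia.txt", "w") as f:
    f.write("A" * 100 + "B" * 100 + "C" * 100)

with open("encyclopedia.txt", "r") as f:
    f.seek(150)            # move the current reading position to byte 150
    print(f.read(10))      # -> "BBBBBBBBBB"; nothing in the file is disturbed
    f.seek(250)            # jump again, as a reader flips to another page
    print(f.read(10))      # -> "CCCCCCCCCC"
```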

We can imagine information storage systems that would fail to have a nondestructive position focus mechanism. You could write information on all the dried leaves in your backyard. But then if you tried to read from some particular position, you would have to step on many leaves, and mess up the information in them. Or, you could write a great deal of information with your fingers on the wet sand of a beach. But if you were to try to read from a particular spot, you would walk over the lines and destroy some of the information. Similarly, it would not work to store information by putting loose pages in a big bag. There would be no position focus mechanism allowing a fast retrieval. 

Brains have no known position focus mechanism. There is nothing like a neural cursor that travels from one position in the brain to another, implementing something like a current reading position. There is nothing in the brain like a read/write head of a computer. Eyes have a physical mechanism allowing them to focus on one particular spot, but there is no sign in the brain of any mechanism allowing a physical focus to occur on one small set of neurons, something like a position focus mechanism. 

Characteristic #5: Hierarchical Organization

In something like a printed encyclopedia set, information is stored using a hierarchical organization. The organization goes like this: pixels are organized into characters, which are organized into words, which are organized into sentences, which are organized into paragraphs, which are organized into topic articles, which are organized into volumes. Such an organization facilitates the fast retrieval of information. 

Something rather similar goes on in computers. Computers use a folder system or directory system that can be hierarchically organized. So, for example, in the screen shot below we see a file called stdole.dll that is in a subdirectory of an stdole directory, which is within a GAC directory, which is within an assembly directory, which is within a Windows directory. When there are very many files on a computer, it is much faster to find a file with such a hierarchical organization than if all of the files were in the same directory or folder. 

[Screen shot: directory structure]
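A minimal sketch in Python (my own illustration; the version subdirectory name is a placeholder) of how a hierarchical organization speeds up retrieval: each level of the tree narrows the search to a single branch.

```python
# A directory tree modeled as nested dictionaries, mirroring the path described above.
tree = {
    "Windows": {
        "assembly": {
            "GAC": {
                "stdole": {
                    "some_version_subdirectory": {     # placeholder name
                        "stdole.dll": "<file contents>"
                    }
                }
            }
        }
    }
}

def retrieve(tree, path):
    node = tree
    for part in path.split("/"):
        node = node[part]          # one step per level; all other branches are ignored
    return node

print(retrieve(tree, "Windows/assembly/GAC/stdole/some_version_subdirectory/stdole.dll"))
```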

Conversely, there is no sign that brains store information using hierarchical organization. We can see no signs of a hierarchical organization of neurons or a hierarchical organization of synapses. 

Characteristic #6: Places for Permanently Storing the Fast-Retrieved Information

Fast-retrieval systems such as books and computers have places for permanently storing information. The printed pages of a book will store information for more than a century. A computer will store information for many years, even if the power is turned off. 

Conversely, the brain seems to have no place suitable for the storage of fast-retrieved information. The main theory of memory storage in the brain is that memories are stored in synapses. But synapses are physically unstable. On a molecular level, the proteins that make up synapses are short-lived, having average lifetimes of two weeks or shorter.  

On a larger structural level, synapses are unstable. Synapses are so small that it's almost impossible to track the lifetime of an individual synapse. But we know that synapses are attached to larger units called dendritic spines, rather like sewer lines or electrical lines are attached to a house.  Dendritic spines are large enough to be observed with high-powered microscopes. Such observations tell us that dendritic spines are pretty short-lived. 


Dendritic spines last no more than a few months in the hippocampus, and less than two years in the cortex. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the hippocampus have a turnover of about 40% every 4 days. This study found that dendritic spines in the cortex of mice brains have a half-life of only 120 days. The Wikipedia article on dendritic spines says, "Spine number is very variable and spines come and go; in a matter of hours, 10-20% of spines can spontaneously appear or disappear on the pyramidal cells of the cerebral cortex." Referring to in vivo observations of dendritic spines in the mouse hippocampus, the paper here says the authors "measured a spine turnover of ~40% within 4 days." The 2017 paper here ("Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex") found the following regarding dendritic spines in the cortex of rodents:

"About 80% of synapses were detectable for a day or longer; about 60% belonged to the stable pool imaged for at least 8 days. Even this stable pool was found to turn over, with only, 50% of spines surviving for 30 days or longer. Assuming stochastic behaviour, we estimate that the mean lifetime of the stable pool would be on the order of 120 days."

Because dendritic spines don't last for five years, we should conclude that synapses (typically attached to dendritic spines) don't last for five years. But humans can accurately remember things for 50 years or more. 
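A simple worked calculation (my own, using the roughly 120-day half-life figure cited above and assuming simple exponential turnover) shows how drastic the mismatch is between spine lifetimes and memory lifetimes:

```python
# If a pool of synapses turned over with a ~120-day half-life (the cortical
# dendritic spine figure cited above), what fraction of the original pool
# would remain after 50 years? Assumes simple exponential turnover.
half_life_days = 120
days = 50 * 365

surviving_fraction = 0.5 ** (days / half_life_days)
print(surviving_fraction)    # roughly 1e-46: effectively none of the original pool
```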

Characteristic #7: Use of a General-Purpose Encoding System

Books and computers both use a general-purpose encoding system, capable of storing an almost infinite variety of information. In books the encoding system is the alphabet of a particular language. In computers the encoding system involves multiple protocols such as the ASCII protocol by which English characters are represented as decimal numbers, and a decimal-to-binary protocol by which decimal numbers can be represented as binary numbers such as 0000000000001110.
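A minimal sketch in Python of the encoding chain just described: each character maps to a decimal number (its ASCII code), and each decimal number maps to a string of binary digits.

```python
# Character -> decimal ASCII code -> 16-bit binary string.
for ch in "CAT":
    code = ord(ch)                  # e.g. 'C' -> 67
    bits = format(code, "016b")     # e.g. 67 -> '0000000001000011'
    print(ch, code, bits)
```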

No one has ever discovered any type of general-purpose encoding system in the brain. The only known encoding system in the brain is the genetic code, a very limited type of encoding system under which certain triple combinations of nucleotide base pairs stand for particular amino acids. The same system is used in all parts of the body, including your feet. No one has ever discovered any type of encoding system by which something such as English text could be represented as stored information found in neural states or synapse states. No one has ever found a single English word or a single image stored in a brain after examining brain tissue with a microscope.

A scientific paper notes the lack of any encoded information permanently stored in synapses:

"Synapses are signal conductors, not symbols. They do not stand for anything. They convey information bearing signals between neurons, but they do not themselves convey information forward in time, as does, for example, a gene or a register in computer memory. No specifiable fact about the animal’s experience can be read off from the synapses that have been altered by that experience.”

Conclusion

The human brain bears no physical resemblance to a device for the fast retrieval of information, and has none of the main characteristics of systems that are devices for the fast retrieval of information. But we know that humans can retrieve information at instantaneous speeds. This is routinely shown on the show Jeopardy, where contestants retrieve information and speak an answer (stated in question form) pretty much the instant that the host finishes reading a question (stated in answer form).  Any performer singing a Gilbert and Sullivan patter song will be retrieving information at a speed of roughly three words per second, and I can mentally recall some of their songs at a rate of five words per second. 


Given the complete lack of any coordinate system or addressing system in the brain by which the exact locations of neurons can be specified, the brain can be compared to these things:
(1) the US phone system if no one's phone number had ever been published;
(2) a vast post office with countless post office boxes, none of them numbered;
(3) a city in which none of the streets were named, none of the buildings had an outside identifier, none of the apartments had apartment numbers, and none of the houses had street numbers;
(4) a vast library in which none of the books have titles on their covers, and none of the chapters have chapter titles.

Imagine how hard it would be in any of these situations to navigate to a precise location -- a particular post office box, a particular phone, a particular chapter of a particular book, or a particular apartment. That's the kind of situation that should exist in a brain storing abundant memories, because there is no coordinate system in a brain, and neurons don't have neuron numbers or anything like a brain longitude and latitude. Instantaneous recall of rarely recalled memories and rarely recalled facts should be impossible if our memories were stored in brains. The fact that we routinely perform such instantaneous recalls is strong evidence that our memories are not mainly stored in brains.

The complete lack of any workable theory for how memory recall can occur so quickly is admitted by neuroscientist David Eagleman, who states:

"Memory retrieval is even more mysterious than storage. When I ask if you know Alex Ritchie, the answer is immediately obvious to you, and there is no good theory to explain how memory retrieval can happen so quickly."

I haven't even mentioned here very severe signal transmission slowness factors and cumulative synaptic delay factors (discussed here) which make an additional very strong reason for thinking that brains must be way, way too slow to account for instant memory recall.  We don't think and recall at the speed of brains; we think and recall at the speed of souls. 

Tuesday, May 17, 2022

Floundering Failure of Those Urging Neural Solutions to Mental Illness

About a week ago the Los Angeles Times had an interview with mental health expert Andrew Scull, one in which Scull calls attention to the huge failure of those trying to treat mental illness mainly through neural solutions such as alterations of brain chemistry.  Scull claims that the director of the National Institute of Mental Health from 2002 to 2015 produced "uniformly dismal results." He cites a Wired interview in which that person (Tom Insel) made this confession: "I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs---I think $20 billion---I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness.” 

In that interview we read about how Insel is interested in monitoring people's online speech, to try to pick up signs of mental illness. We read this:

"One of the first tests of the concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. 'The complication is developing the behavioral features that are actionable and informative,' Insel says. 'Looking at speed, looking at latency or keystrokes, looking at error---all of those kinds of things could prove to be interesting.' "

So will Big Brother soon be monitoring your online speech, ready to report you to some mental-health monitoring authority if you are writing something that violates fluctuating norms of correct speech? That sounds very Orwellian. 

In the LA Times interview, Scull blasts the mental health approach based on genetics and neural chemistry. Scull states the following: 

"People with serious mental illness live, on average, 15 to 25 years less than the rest of us, and that gap seems to be widening, not narrowing. While genetics and neuroscience have flourished within the confines of universities, their therapeutic payoff has been minimal or nonexistent.  I’m a sociologist, so you may think I’m biased. Perhaps I am, but in my judgment, Insel’s fixation on biology and biology alone has been a profound error. It threatens to undermine the prospects for progress in the mental health arena.  Unfortunately, it is the same approach that seems to dominate the thinking and priorities of his successor at NIMH [the National Institute of Mental Health], Joshua Gordon. Gordon is a neuroscientist whose own work, focused on neural activity in mice, and his appointment indicates that the federal research enterprise will double down on neuroscience and genetics."

Scull describes some of the mistreatment of the mentally ill that occurred in the twentieth century by people convinced that mental illnesses were almost entirely matters of genetics and the brain:

"Compulsory sterilization; removal of teeth, tonsils and internal organs to eliminate the infections that were allegedly poisoning their brains; inducing life-threatening comas with injections of insulin; subjecting them to multiple episodes of electroshock treatments day after day till they were dazed, incontinent, and unable to walk or feed themselves; damaging the frontal lobes of the brain, either with an instrument resembling a butter-knife or by using a hammer to insert an icepick through the eye socket and sever brain tissue: these were unambiguously, horrendous interventions."

Great progress may be made in treating the mentally ill when we stop thinking of human beings as brains and bags of genes, and start thinking of human beings as souls and products of society, who can be helped mainly with social, educational, psychological, charitable and spiritual aid. 

Tuesday, May 10, 2022

Saying Consciousness Is a Wave Function Collapse Is Like Saying Your Mind Is a Square Root

Anesthesiologist Stuart Hameroff has recently written an article with the title "Consciousness Is the Collapse of the Wave Function."  The article is a rambling mishmash of physics, chemistry and neuroscience that completely fails to provide anything resembling a credible notion for how a brain could produce awareness. 

Hameroff discusses a theory advanced about 20 years ago by him and the physicist Roger Penrose, called the Orchestrated Objective Reduction theory. The theory has been basically ignored by the scientific world, and it seems that almost no one believes in it other than Penrose and Hameroff. The theory claims that consciousness is produced by some tiny units called microtubules. 

Microtubules are units inside neurons, and each neuron has many of them. The function of microtubules is known: they serve to provide structural support for a neuron, and also help transport chemicals. Claiming that they also provide consciousness rather reminds me of that classic Saturday Night Live sketch in which someone claims that his floor wax is also a dessert topping.   

Hameroff makes the untrue claim that something has taken place to lend credibility to this theory. He states this:

"Penrose suggested that wavefunctions collapse spontaneously and in the process give rise to consciousness. Despite the strangeness of this hypothesis, recent experimental results suggest that such a process takes place within microtubules in the brain."

What are these recent experimental results? The article does not tell us. The article has no reference or link to any scientific paper. It merely has a link to a Wikipedia.org article on something called superradiance, an article saying nothing about the mind or consciousness.  Hameroff's claim that experimental results support his theory is untrue, and his failure to cite or link to such results suggests his statement is groundless.  

The article is a classic example of using what I call the Mixture Method, a method I describe in my post "The Mixture Method Works Wonders When Selling Speculation as Science." The method consists of mixing up speculations with either scientific facts or mathematics or a combination of the two, usually in a way so that the speculative parts are a relatively small part of the paper or article. The goal is to kind of give a scientific flavor or a scientific sound to some claim that is speculative. Often the scientific facts cited are irrelevant to the speculations made, or have only a tangential relation to them.  

[Image: speculation in science]

A long section of Hameroff's article suggests that he is mainly just spouting irrelevant facts to give some scientific sound to his speculation. Beginning with his sentence "light is the part of the electromagnetic spectrum that can be seen by the eyes of humans and animals – visible light," and ending with his probably incorrect sentence "These micelles somehow developed into functional cells, and then multi-cellular organisms, long before genes," we have more than 500 words mentioning things that are all irrelevant to the claim made in his title, and all irrelevant to the issue of how people have consciousness. The discussion is a very jumbled hodgepodge of scientific facts and irrelevant observations, bouncing all over the place, going from the early universe to facts of chemistry to speculations about the origin of life to mention of mystical experiences. Irrelevant scientific facts are being cited at great length, to help give some scientific sound to some speculation that is metaphysical. 

When you're following the mixture method, it helps if one of the scientific topics mentioned is an extremely obscure topic. That way you can mention some deep-sounding scientific topic, and people will probably fail to notice how such a mention is irrelevant to your speculation. The deep scientific topic mentioned is the collapse of the wave function.  The wave function is some mathematical concept that comes up in the abstruse field of quantum mechanics.  Supposedly a wave function "collapses" when a measurement is made of a particle such as a photon or an electron. 

That has nothing to do with consciousness, and nothing to do with minds.  When people try to drag the collapse of the wave function into a discussion of the origin of consciousness, what goes on seems to go like this:

(1) The hi-tech type of physicist measurement occurring when wave functions collapse is stretched a hundred-fold, into the more general idea of "observation."

(2) "Observation" is then conflated with "consciousness." But observation is not consciousness. Security cameras observe things, and they are not conscious. And unconscious automated equipment can measure things. 

Using the term "it is becoming apparent" for something for which there is zero evidence, Hameroff states this: "It is becoming apparent that consciousness may occur in single brain neurons extending upward into networks of neurons, but also downward and deeper, to terahertz quantum optical processes, e.g. 'superradiance' in microtubules, and further still to fundamental spacetime geometry (Figure 1)." The figure 1 is a diagram that very strangely shows us  columns consisting of (1) a neuron; (2) microtubules in a neuron; (3) hypothetical bumps in spacetime.  What is this superradicance being talked about? Penrose refers us to a wikipedia.org article on that topic, but the article makes no mention of biology or the brain. It merely mentions high-energy physics sources of superradiance such as hot gases, and astrophysical sources of superradiance.  

The paper "Superradiance -- The 2020 Edition" is a 261-page physics paper on the topic of superradiance. The paper makes no mention of the brain, no mention of neurons, and no mention of cells, but it does talk a lot about black holes.  It seems Hameroff has no business mentioning superradiance to try to support the claim that consciousness is produced by microtubules. 

Here's a reality check: it's dark as the dark side of Pluto inside brains and inside microtubules.  A scientific article gives us the truth about radiance from cells:

"Cifra and colleagues cultured millions of yeast cells in a light-tight chamber. The signal detected using photomultiplier tubes tends to be extremely weak: A photon emitted every 15 minutes per cell..Cifra is cautious about concluding whether these ultraweak emissions—he prefers the term 'biological auto-luminescence'—play a significant role in biological signaling, or if they are simply by-products...From a theory standpoint, Cifra says, the signals are simply too low to be used for communication."

Even if superradiance were to be occurring in microtubules, it would do nothing to show that microtubules or brains produce consciousness. Radiance and superradiance are physics phenomena, not mind phenomena.  While people refer to "light of the mind" or "the light of consciousness," or a "mental illumination," they are merely being metaphorical.  Light is no more consciousness than heat is consciousness. 

Hameroff states, "I agree that consciousness is fundamental, and concur with Roger Penrose that it involves self-collapse of the quantum wavefunction, a rippling in the fine scale structure of the universe." It seems that Hameroff is bouncing around between three different ideas:

(1) the idea that consciousness is produced by microtubules;

(2) the idea that consciousness is the collapse of a wave function;

(3) the idea that consciousness is produced by a rippling in the fine scale structure of the universe. 

These are three different ideas, all groundless.  Which one does Hameroff believe in? Apparently all three, as if he can't make up his mind. It sounds like Hameroff can't get his story straight.  

According to mainstream understanding of quantum mechanics, a wave function collapse occurs only upon observation or measurement. There are no observations or measurements occurring within microtubules.  So in the context of mainstream quantum mechanics, talking about wave function collapses within microtubules doesn't make sense.  Of course, you can always play around with unorthodox theories of quantum mechanics, but doing that has produced some of the silliest statements of modern scientists, such as Hugh Everett's bat-bleep-crazy theory of some infinity of parallel universes. 

Penrose and Hameroff found it necessary to become quantum theory heretics, by postulating the speculative idea that wave functions spontaneously collapse (instead of collapsing only during measurement or observation). Maybe their thinking was rather like this: wave function collapses as we now understand them occur only with conscious observers, so if wave functions spontaneously collapse, they would produce conscious observers. No, they would not. If such spontaneous wave function collapses occurred, it would merely mean we need to revise the current prevailing idea that wave function collapses only occur with observation. Similarly, the fact that car crashes only occur with drivers gives you not the slightest warrant for thinking that spontaneous car crashes not involving observers (like two unmanned parking lot cars colliding) would somehow conjure up the sudden appearance of car drivers.   

Thinkers such as Hameroff try to suggest that there can be quantum effects in the brain, and that the brain can act like a quantum computer. Such insinuations are futile in explaining human awareness. Computers can compute, but they are not aware, and have no consciousness. You can't compute your way to consciousness. Also, it is completely erroneous to think that you can show a brain could compute by showing some brain tissue acts like computer hardware. Computing inside a computer requires not just hardware but also computer software. Brains have nothing like the software in computers that enables computation. 

I can give some advice for people trying to make some progress in understanding the human mind:

(1) Don't just study microtubules, but study the entire brain, studying at great length exceptional case histories of minds that performed well even with very little brain, and studying at great length the topic of neural shortfalls, all the ways in which the brain fails to have the physical characteristics we would expect it to have if it were the source of our minds and the storage place of our memories. You can get such information by reading the posts at this blog. 

(2) Don't study quantum mechanics or high-energy physics when trying to clarify the human mind, but do make a very thorough study of human mental phenomena, including a long and thorough study of the evidence for psi and observational reports of seemingly paranormal and currently inexplicable human experiences such as near-death experiences, hypnotic phenomena and out-of-body experiences.  

(3) Do not become a fan of any theory that takes the futile, dead-end approach of merely trying to explain "consciousness" (a bloodless,  stripped-down term suitable for describing the mind of an ant), and recognize the necessity for explaining the full range of human mental phenomena, including memory, thinking, personality, belief, understanding, self-hood, creativity, and the many well-observed and carefully documented anomalous mental phenomena that our professors should be paying very close attention to but senselessly ignore. 

[Image: complexity of minds]
A mind is so much more than just "consciousness"

Wednesday, May 4, 2022

EEG Studies Fail to Provide Robust Evidence That Brains Think or Retrieve Memories

To try to provide evidence for their claims that memories are stored in the brain and that brains produce mental phenomena such as thinking and imagination, neuroscientists look for what they call neural correlates of mental activity. A neural correlate of a mental activity would be some sign that a brain acts differently or looks differently when someone does a particular type of mental activity such as thinking or recalling. 

The most common way that a neuroscientist will look for a neural correlate of a mental activity is to image someone's brain while he is doing some mental activity, typically using an fMRI scanner.  In my post "The Brain Shows No Sign of Working Harder During Thinking or Recall," I discussed the failure of such studies to provide robust evidence that brains produce thinking or recall.  The following are tips for analyzing such studies:

(1) Search for the phrase "percent signal change" to quickly find out how much of a difference was found during some mental activity. A large fraction of all fMRI-based neural correlate studies will use such a phrase. 

(2) Find out the sample size used, and whether a sample size calculation was performed to show that the sample size was adequate. The vast majority of fMRI-based neural correlate studies fail to provide a sample size calculation, and the vast majority of such studies use way-too-small study group sizes, so small that they are not reliable evidence for anything. 

What you will typically find is that such studies show only extremely small changes in brain activity, involving changes of smaller than one half of one percent. Such variations of only about one part in 200 or smaller are not robust evidence that brains produce thinking or produce recall. We would expect to get variations of such a size given random moment-to-moment fluctuations in brains, variations that would occur even if a person was not thinking and not recalling anything. And the fact that the vast majority of fMRI-based neural correlate studies use way-too-small study group sizes means that such studies are not robust evidence of anything. As discussed here, a recent large study was announced with a headline of "Brain studies show thousands of participants are needed for accurate results," but a typical fMRI-based neural correlate study will not even use dozens of participants. 
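A toy simulation (entirely my own assumptions, not data from any study) illustrates the point about random fluctuations: if each subject's measured percent signal change were nothing but noise with a hypothetical standard deviation of 1%, a group of 15 subjects would still show group-average "changes" of a few tenths of a percent, purely by chance.

```python
import numpy as np

# Even with NO real effect, small groups produce nonzero group-average
# "percent signal changes" from noise alone. The 1% per-subject noise level
# is a hypothetical figure chosen only for illustration.
rng = np.random.default_rng(0)
n_subjects, noise_sd = 15, 1.0

# 10,000 simulated null experiments, each averaging 15 noisy subjects.
group_means = rng.normal(0.0, noise_sd, (10_000, n_subjects)).mean(axis=1)
print("typical purely-random group-average change: %.2f%%" % np.abs(group_means).mean())
# With these assumptions the answer is about 0.2%, not far below the ~0.5%
# changes that such studies report.
```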

But there is an entirely different way in which neuroscientists can look for neural correlates of memory recall and thinking. Rather than using big fMRI machines to scan the brain, a neuroscientist can hook up brains to electroencephalography machines (EEG machines) that read electrical activity of the brain.  To produce such readings, many different electrodes will be attached to the heads of subjects who are being tested. The output is not an image of the brain, but a reading showing lines that go up and down.  A neuroscientist can study such lines, looking for some neural correlate of thinking or recall that shows up as a difference in a wiggly line. 

Scientists studying such EEG outputs are looking for what they call an event-related potential or ERP.  In theory an ERP is some EEG pattern that might be repeated whenever some mental event occurs such as recognition or recall or concentration. In the literature an ERP is typically described as some blip occurring over less than a second. Figure 5 of the paper here gives us a "heat map" of various claimed ERP effects relating to cognition. The claimed effects have various names listed on the right side of the heat-map, names such as N400 and CDA (standing for contralateral delay activity). 
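For readers unfamiliar with the technique: in standard practice an ERP is obtained by cutting the continuous EEG into short stimulus-locked epochs and averaging them, so that random activity cancels and any time-locked blip remains. Here is a minimal sketch with invented numbers (simulated noise plus an artificial blip), not real data.

```python
import numpy as np

# Simulated EEG from one electrode: 60 trials of noise, with an artificial
# 10-microvolt blip inserted around 400 ms after the "stimulus."
rng = np.random.default_rng(1)
n_trials, n_samples = 60, 200                 # 200 samples standing in for ~1 second

epochs = rng.normal(0.0, 20.0, (n_trials, n_samples))   # background noise
epochs[:, 80:100] += 10.0                     # the artificial time-locked blip

erp = epochs.mean(axis=0)                     # averaging across trials cancels most noise
print("baseline level: %.1f   blip level: %.1f" % (erp[:50].mean(), erp[80:100].mean()))
```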

What typically goes on is a cherry-picking affair.  A neuroscientist will typically use a type of EEG device with 128 electrodes, each of which is attached to a different part of the head.  After the device records neural activity,  there will be 128 different readings, each from a different part of the head.  Each reading will be some long wavy line.  Imagine a paper scroll about three inches high and 100 meters long, with a wavy line stretching from beginning to end, and you'll have a rough idea of the output from any electrode.  Neuroscientists will not typically show us some graph showing the statistical average of all of these lines. Instead, they will be free to choose any group of electrodes, to try to show some correlation effect.  

Imagine you are a neuroscientist. Did you fail to get any correlation effect from averaging the outputs from electrodes 98, 99, 100 and 101? Then you can just keep playing around with electrode combinations until you get something that looks like an effect. For example, maybe you'll get something that looks like an effect if you average the results of electrodes 34, 35, 37 and 38.  If the studies were properly designed, using a pre-registration in which an exact methods description was published before data gathering, such dubious "slice and dice until you get a desired result" techniques would not be possible. But we almost never see any such pre-registration in these EEG neural correlate studies. Also, there's no rule that you cannot cherry-pick two or three electrodes that were not adjacent. 

So, for example, in the study here in Figure 3 we have a diagram showing two graphs of nice-looking ERP effects. The caption tells us the first graph is from electrode 65, and that the second graph is from electrode 91. But Figure 2 shows that 128 electrodes were used, and that electrode 65 is on the other side of the head, nowhere close to electrode 91.  Our authors have apparently cherry-picked the results from 128 electrodes, looking for the results that would best show the desired effect.  

A scientific paper about the shortfalls of studies looking for these ERP effects tells us the following:

"An example of this issue is described in a recent paper by Luck and Gaspelin (2017), who demonstrated how 'researcher degrees of freedom' could influence statistical analysis of ERP data. ERP recordings typically employ dozens of electrodes and result in hundreds of time points, which results in an almost unlimited variety of possible data analysis approaches, and, consequently, in the probability of a false significant finding approaching certainty."

Many such studies have been done, but they have failed to produce any robust evidence that human brains produce memory recall or thinking. Let us look at some of these studies, and the results that have been claimed. I will use the "heat map" in Figure 5 of the paper here to select the best-reported claimed ERP effects for cognitive activity. According to that figure, the best-reported ERP effects relating to cognition are:

(1) A CDA or contralateral delay effect having something to do with memory;

(2) an FN400/N400 effect (also called an "ERP old/new effect") having something to do with recognition;

(3) an N170 effect having something to do with categorization;

(4) a P100 effect  (also called a P1 effect) having something to do with attention.

It is claimed that an "ERP old/new effect" (apparently the same as or similar to an FN400/N400 effect) is some EEG sign of recognition. Looking at the papers attempting to show this effect, we see nothing that looks very impressive. The claim is that when people look at a list of words that includes both words they were asked to memorize and words they were not asked to memorize, some type of brain wave read from the parietal region of the brain looks slightly different for only about a fifth of a second. 

No robust evidence has been provided for such an effect, because the study group sizes used in the studies claiming such an effect are too small. Even if such a fraction of a second effect was observed, it can be explained without assuming that a memory has been retrieved from the brain.  When somebody recognizes something, there can be a kind of "aha" effect in which muscular responses differ very slightly.  For example, after recognizing a face in the crowd, a person's facial expressions can be different than when encountering a stranger, with the difference lasting only an instant. Such a difference could easily be the explanation for some marginal fraction-of-a-second difference showing up in a reading of brain waves. 

In one paper I read claiming to get this fraction of a second "ERP old/new effect," the instructions were for subjects to click an "Old" button if they recognized a word, and a "New" button if they did not. The instructions stated that the "Old" button should only be clicked if the subject was sure he had seen the word before. With such instructions, there easily could be a kind of momentary pausing effect when people thought they recognized a word, during which they were wondering whether they were sure about seeing the word before.  Such a muscular pausing could be the cause of this fleeting "ERP old/new effect," with the effect having nothing to do with a difference in brain activity during recognition. 

This "ERP old/new effect is apparently the same (or involves or is related to) something called the N400 response. A paper described it like this:

"The N400 is a negative-going wave peaking at about 400 ms, whose amplitude is larger after presentation of a stimulus whose probability of occurrence is low within its semantic context (Kutas & Federmeier, 2011). For example 'He spread the warm bread with socks' would elicit a larger N400 than 'He spread the warm bread with butter” (Kutas & Hillyard, 1980).' "

This is another alleged neural correlate of cognitive activity that can easily be explained purely by muscle activity having nothing to do with the mind. The person presented with some crazy sentence may have a different muscle response, perhaps a look of bemusement on his face, or a kind of "huh?" look on his face.  Since the reported N400 response only involves a fraction of a second difference, we can't tell whether evidence is being picked up of brains thinking, or merely evidence of a tiny-bit different muscle response. 

A meta-analysis of studies about this claimed N400 response tells us that the average number of subjects used is only about 15. Is such a sample size large enough? It is not, judging from the paper here. That paper is devoted to estimating how large a study group size would be needed to detect a particular ERP effect, one similar to the claimed N400 response and the claimed "ERP old/new effect." The paper tells us that getting a fairly good 80% statistical power would require at least "30–50 clean trials with a sample of 25 subjects." 
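A Monte Carlo sketch (my own; the effect size is a hypothetical 0.5 standard deviations, not a figure from the N400 literature) shows what low statistical power means in practice for a 15-subject experiment:

```python
import numpy as np
from scipy import stats

# How often does an experiment reach p < 0.05, assuming a hypothetical
# within-subject effect of 0.5 standard deviations?
rng = np.random.default_rng(3)

def estimated_power(n_subjects, effect_size=0.5, n_sims=20_000):
    data = rng.normal(effect_size, 1.0, (n_sims, n_subjects))
    pvals = stats.ttest_1samp(data, 0.0, axis=1).pvalue
    return (pvals < 0.05).mean()

print("power with 15 subjects:", estimated_power(15))   # well under the usual 80% target
print("power with 25 subjects:", estimated_power(25))   # better, but still not high
```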

There's another claimed ERP effect called the contralateral delay effect or CDA. The effect is claimed to occur as a fraction-of-a-second blip when people are shown screens having colored circles or colored squares, and asked to identify whether a later screen matches the previous screen. Figure 1 of the paper here shows the type of screens shown. The visual below shows the kind of screens shown, and how long the inputs were shown.

After taking EEG recordings of the brain waves of people during such an activity, scientists have claimed that there is some distinctive blip that shows up (lasting only a fraction of a second), something they call a contralateral delay effect or CDA. It has been claimed that such an effect is a correlate of working memory. But since the alleged effect is extremely short-lived, it provides no evidence that brains store memories. What is showing up could simply be related to vision or to some color persistence effect by which a perceived color will hang around in the mind or brain for a second or two. 

It is well-known that there is something called an "afterimage," in which you can see something after you have stopped looking at it. For example, the web page here has a photo of Amy Winehouse that is strangely colored. Look at the dot at the center of the photo for 30 seconds, and then look to the blank white area to the right of the photo. You will then see a ghostly afterimage of Amy Winehouse. Whatever that type of effect is, it isn't memory. It's just a "lingering of perception" thing. The claimed CDA effect may merely be picking up that type of short-term thing, not something related to a brain storage or retrieval of memories. 

The N170 effect is some ERP effect supposedly produced when someone is shown a picture of a face. Referring to a mere fraction of a second, the wikipedia.org article on the effect claims that this alleged effect lasts only "130-200 msec after stimulus presentation." Figure 1 of the paper here has a diagram similar to the schematic diagram below, with the black line representing the response from seeing faces, and the gray line representing the response from seeing objects that are not faces.


This meta-analysis tells us that most of the faces used in studies of the N170 effect have been emotional faces. The faces shown usually had expressions such as fear, disgust or joy. You can easily explain the fraction-of-a-second blip without imagining that viewing faces involves some recognition activity by the brain; all that may be getting picked up is a slight physiological response to an emotional stimulus. Studies of the N170 effect do not rule out some scenario like this:

(1) You see a face with an emotional expression, and your mind or soul (not your brain) recognizes the emotion. 

(2) Seeing emotion on someone's face produces a slight physiological response, which shows up as a fleeting blip in brain waves. 

The P100 effect (also called a P1 effect) is also some claimed small-fraction-of-a-second effect supposedly occurring for about 50 milliseconds when a person engages in visual selective attention, such as looking at only the left part of a screen. Eye muscles behave differently when you focus on only one side of a screen. Since such an instantaneous effect can easily be explained in terms of muscle activity involving the eyes, it provides no good evidence that brains are producing mental attention.  

Nothing we have discussed provides any good evidence that brains produce thinking, that brains store memories, or that brains retrieve memories. What kind of test can we imagine that would be a good test of such claims? The test might go something like this:

(1) Subjects wearing EEG electrodes on their head would be asked to look at photos displayed on a computer screen, with each photo shown for five seconds.  Most of the photos would be photos of people who were not famous and could not be recognized. One third of the photos would be photos of famous people with neutral expressions, none of whom were scary or threatening.  A computer program would assure a random shuffling of the photos. 

(2) Subjects would be asked to remain motionless and expressionless. Subjects would be told to simply say in their mind (without speech)  "Go" if they recognized the face, and "No" if they did not. 

(3) Attempts would be made from reading brain waves to determine whether there was any correlation between the perception of recognized faces and the perception of faces that were not recognized. 

Such a test would fail. No robust evidence would be found for a neural correlate of recognition. 

I used the "heat map" in Figure 5 of the paper here to select the best-reported claimed ERP effects for cognitive activity. It is interesting what is not reported in that heat map. According to the map it seems:

(1) There are no strong ERP/EEG effects for learning. 

(2) There are no strong ERP/EEG effects for decision making. 

(3) There are no strong ERP/EEG effects for prediction. 

(4) There are no strong ERP/EEG effects for executive function. 

(5) There are no strong ERP/EEG effects for perception.

(6) There are no strong ERP/EEG effects for speech.

Overall, EEG studies fail to provide robust evidence that thinking or decisions or memory retrieval or memory storage occurs because of the brain. The shape-seeking scientists eagerly looking for these slight, fleeting blip effects in EEG lines can be compared to people eagerly scanning the clouds looking for shapes that resemble animals, to back up some belief that the ghosts of dead animals live in the sky. 

The sample sizes used in these EEG/ERP studies are generally way too small to provide robust evidence for a real effect. The headline of a news release of an important recent study is "Brain studies show thousands of participants are needed for accurate results." But these EEG/ERP studies typically involve only about 15 subjects per experiment. A huge defect calling into question the reliability of all such studies is that the researcher is free to scan the results from 128 electrodes, and cherry-pick the output from whatever few electrodes best show some sub-second effect that is being eagerly sought, doing additional cherry-picking that involves looking for some one-second slice of time in which the effect shows up the most. This is a recipe for "conjuring phantoms." Given such complete freedom to scan data looking for some fleeting blip in wavy lines, it is easy to find almost any imaginary effect you might be hoping to find. In general, the fleeting ERP blips that are found can be explained as brain involvement in muscle activity and physiological activity, without postulating that brains are the source of thinking and memory.