Monday, March 31, 2025

Anomaly Aversion Greatly Hurts Mind Cause Analysis and Medicine Side Effect Analysis

"The past three decades have shown that psychiatry’s medical vision is neither scientifically credible nor morally sound." -- Justin Garson, professor of philosophy at Hunter College (link) 

The Mad in America site (www.madinamerica.com) is a good site for getting contrarian opinion and analysis on the topic of psychiatric treatment. On the day I am writing this post (which I will schedule for later publication), there is an interesting article on the site entitled "The Iatrogenic Gaze: How We Forgot That Psychiatry Could Be Harmful." The article by Alex Conway is a troubling portrait of psychiatrists who fail to do their job properly because they do too much avoiding, downplaying and ignoring of reports of anomalies. The anomalies are side effects reported by people using the medicines prescribed by the psychiatrists: side effects that are not on official lists of possible side effects of a drug.

Normally when a drug is approved by a regulatory body such as the FDA, an official list of common side effects is published, along with a list of rare side effects. But what happens when someone prescribed that drug reports a side effect not on such a list? The authority receiving such a report may treat it as an anomaly to be shunned, ignored or explained away. In the article we read this chilling report of someone subjected to the most harmful gaslighting after she reported a side effect of a medicine:

" The doctor diagnosed her with a delusional disorder. Her medical records state: 'Firm fixed delusional belief about medications causing side effects… Impression: delusional disorder with significant health anxiety and OCD with comorbid depressive symptoms.'  Rosie was then committed against her will to a psychiatric facility and forced to take medication for her 'delusions'. Four years later, Rosie still suffers the same symptoms."

We read this:

"Drug Injuries are a piece of a wider problem known as iatrogenesis (doctor-made illness). Although drug injuries are virtually absent from public discourse, available research considers them a leading health risk. A 2018 paper examined FDA data on adverse drug events (ADEs) and found a rate of over 100,000 serious injuries per year. That rate has been increasing rapidly: 'From 2006 to 2014, the number of serious ADEs reported to the FDA increased 2-fold… A previously published study… found that from 1998 to 2005, there was a 2.6-fold increase in the reports of serious ADEs and a 2.7-fold increase in the reports of fatal ADEs'. It seems that the rate of serious drug injury is doubling per decade."

The article is very long-winded, and it is hard to quote a few choice lines or a paragraph to give its main idea. I can try to summarize it like this:

(1)  Psychiatrists are taught a kind of dogma about effects of the drugs they prescribe, including the idea that they only have a certain number of things called "side effects." 

(2) Patients often report what they believe are bad effects from taking such drugs, describing negative effects that are not on such a list of "side effects." 

(3) Psychiatrists receiving such reports go into "explain it away" mode, which may consist of various techniques such as assuring the patient he is mistaken, or in the worst case, recording the patient as having a delusion or some type of mental disorder because he made a report contrary to the dogma the psychiatrist was taught. 

After reading the article, I thought to myself: "That rather rings a bell." The kind of dogmatism, gaslighting and anomaly aversion reported reminded me of what goes on when mainstream scientists receive reports of the paranormal, such as reports of apparitions, near-death experiences, telepathy, clairvoyance, and so forth.  

I can describe a type of anomaly aversion that occurs with neuroscientists and other types of scientists:

(1)  Scientists are taught a kind of dogma about what mental powers or mental experiences are possible, dogma that is based on the idea that all mental experiences come from the brain and the senses.  

(2) Humans often report experiences conflicting with such dogma, describing events such as apparition sightings, near-death experiences, out-of-body experiences, ESP, clairvoyance, and other phenomena that cannot be accounted for by brain explanations. 

(3) Scientists receiving such reports go into "explain it away" mode, which may consist of various techniques such as telling the person making the report that he is mistaken, or in worse cases, trying to insinuate that the witness was hallucinating or lying. 

Below is a comparison of similar behavior in two types of professionals:

[Image: anomaly aversion comparison table]

Anomaly aversion and the ignoring of reported anomalies greatly hurt the proper study of medicine side effects. They also greatly hurt the proper consideration of what causes the human mind. All reports of human experiences that cannot be explained by the "brains make minds" dogma should be studied with great care. Such reports should be collected, preserved, classified, pondered and analyzed. Ignoring such reports is the opposite of good scientific practice. 

The AI art visual below depicts scientific censorship in a visually memorable way:

[AI art image: scientific censorship]
When mainstream scientists repress observations of the paranormal, they don't act in such a visually noticeable way. They act just as repressively and suppressively, but in ways that are not visually noticeable:
  • They refuse to include in their textbooks and papers a discussion of important observations of the anomalous and inexplicable.
  • They refuse to include in their class lessons a discussion of important observations of the anomalous and inexplicable.
  • When acting as peer reviewers and editors, they act to suppress the publication of work by other scholars who fairly report on observations of the paranormal. 
  • In the rare times they discuss the paranormal, they typically make inaccurate generalizations about the quality of evidence for some paranormal phenomena. 
  • They refuse to approve funding for the further investigation of types of paranormal phenomena that have been well-established as highly worthy of further investigation. 
  • In the rare times they discuss particular reports of the paranormal, mainstream scientists use gaslighting and unfair characterization to undermine witnesses of the paranormal, and use selective reporting and distortion to make very compelling evidence sound like it is weak evidence.
[Image: examples of gaslighting]

[Image: dumb professor and smart professor]

Thursday, March 27, 2025

Neuroscientists Offer Only the Most Hazy Hand-Waving When Trying to Explain Memory

"Scientific journals have reported cases of persons whose injuries have necessitated the removal of a large portion of the brain, and whose memory and power of thought were unimpaired by the loss of much cerebral matter, or by damage to centers which are supposed to be necessary to memory and consciousness. Dr. Troude writes:  'As M. Bergson foresaw in 1897, the hypothesis of the brain as conservative of memory images, must be renounced once and for all, and other ideas as to the nature of its role in the act of memory must be accepted. ' "

--- Helen C. Lambert, A General Survey of Psychical Phenomena, p. 49.

Dictionary.com defines "hand waving" as "insubstantial words, arguments, gestures, or actions used in an attempt to explain or persuade." When neuroscientists attempt to explain memory by referring to the brain, they offer only the most hazy hand-waving. Typically what occurs is the repetition of empty slogans and catchphrases.  

[Image: the standard neuroscientist account of memory]

For example, a neuroscientist may claim that memories are formed by "synapse strengthening." There is no substance in this claim, which is mere hand-waving. We have many examples of the storage of knowledge in human-made things such as books, drawings, computer files, messages, handwritten notes and electronic data.  Such knowledge storage never occurs through strengthening.  Instead what typically happens when knowledge is stored in books, films, messages, notes and computer files is that there occurs a repetition of symbolic tokens by some kind of writing process, and the use of some encoding system in which certain combinations of symbolic tokens represent particular words, things or ideas.  That is not strengthening.  
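To make vivid what real knowledge storage looks like, below is a minimal Python sketch (my own illustration, not anything from a neuroscience paper) showing how a phrase gets stored as symbolic tokens under an encoding system (here ASCII):

message = "my dog has fleas"
# Each character is written as a discrete symbolic token (a byte),
# using a fixed code (ASCII) that maps tokens to characters.
tokens = [format(b, "08b") for b in message.encode("ascii")]
print(tokens[:3])   # ['01101101', '01111001', '00100000'] for 'm', 'y', ' '
# Retrieval means decoding the tokens with the same fixed code.
print(bytes(int(t, 2) for t in tokens).decode("ascii"))   # my dog has fleas

Nothing in such storage and retrieval involves anything getting "strengthened"; what is involved is tokens and a code.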

To give another example of empty hazy hand-waving, a neuroscientist may vaguely claim that memories are formed by "the formation of synaptic patterns." There is no substance in this claim, which is mere hand-waving. It is possible to store information by the use of pattern repetitions. For example, you might consider each word in the English language as a pixel pattern, and then say that each use of the word "dog" in a printed book is a pattern repetition. But synapses do not form any recognizable repeating patterns. And if synapses did form such patterns, there would need to exist some synapse pattern reader to read and recognize such patterns; but no such thing exists. Instead of being anything that could consist of stable repeating patterns, synapses are unstable "shifting sands" kinds of things. Synapses are built from proteins that have an average lifetime of only two weeks or less. The maximum length of time that humans can remember things (more than 50 years) is more than 1000 times longer than the average lifetime of the proteins in a synapse. So synapses cannot be the storage place of memories that can last reliably for so long. 
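The arithmetic behind that comparison is a simple rough check (using the figures just cited):

# Rough check of the mismatch, using the figures cited above
memory_span_weeks = 50 * 52        # memories lasting more than 50 years
protein_lifetime_weeks = 2         # average synaptic protein lifetime (or less)
print(memory_span_weeks / protein_lifetime_weeks)   # 1300.0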

Last year we had an example of the type of empty hazy hand-waving that occurs when neuroscientists attempt to explain memory. It was a paper entitled "'Consciousness' as a Fusion of the Global Neuronal Workspace (GNW) Hypothesis and the Tripartite Mechanism of Memory." The paper made quite a few uses of the word "mechanism" but it described no memory mechanism at all. All we got was the most hazy hand-waving. 

I may note that biologists are notorious for abusing the term "mechanism," and often claim to be describing "mechanisms" when they are discussing no such things.  The term "mechanism" is only properly used for an exact description of material things working closely together in time and space. The pumping of blood by the heart and the circulation of blood in the body is an example of a mechanism. But when you are describing things that are not closely working together in a mechanical way, you have no business using the term "mechanism." It is a glaring abuse of language to refer to so-called "natural selection" as a mechanism of evolution, given that the main imagined things (random mutations scattered across vastly separated times and places) are not anything like a mechanism. 

We have in the paper a section entitled "Evolving Memory Mechanism" which fails to describe any mechanism, giving us only the most hazy hand-waving. Then we have a section entitled "Tripartite Mechanism of Emotive Memory" which states only this:

"For the neural emotive memory, we proposed that the cognitive unit of information (cuinfo) is realized materially (sic. chemically [13,28,29]. Thus, a chemically based code permits the achievement of an emotive state instigated by neurotransmitters (NTs) released by neurons/glial cells and the recall of such [1-11]. The proposed tripartite mechanism for encoding memory involves the interactions of neurons with their surrounding extracellular matrix (nECM/PNN). It has been experimentally verified that neurons are not 'naked', but are enshrouded in a web of glycoaminoglycans [44-49] which we propose serves as a 'memory material' [13,28,29]. Incoming perceptions are encoded with trace metals +neurotransmitters (NTs) to form metal-centered cognitive units of information (cuinfo). We have developed a chemographic notation for the tripartite mechanism which captures the essence of this regarding emotive memory (Figure 2)."

This is merely the most hazy hand-waving, decorated by some references to neural anatomy, along with the tiniest sprinkling of chemistry jargon. The Figure 2 referred to is utterly unimpressive. It merely shows a few circles that are labeled as "cations." We also have an unjustified reference to "addresses." The brain has no actual addresses anywhere in it. The lack of addresses in the brain is one of the major reasons why there can be no credible theory of instant recall occurring by brain activity. Addresses, sorting and indexes are some of the things used by devices humans manufacture to allow them to instantly retrieve information. But brains have no addresses, no sorting and no indexes. References 13, 28 and 29 in the quote refer to papers here and here and here, which are not papers referring to biological memory storage, but papers referring to some type of electronic memory storage in human-made devices. Our authors have vaguely referred to some type of chemical as being a "memory material," and have supplied only references to papers not referring to biology, giving us the incorrect impression that there is some biological foundation for their vague reference. 

We read this conceptually empty piece of hazy hand-waving:

"In a binary-formatted computer memory, the individual bits (or bytes) are stored in a matrix. But comprehensive memory results from the collective activity of a group of neurons, not only from the cuinfo of an individual neuron. Thus, a working model of how the brain generates emotive memory needs to meld physiologic effects with electro-biochemical processes. The GNW hypothesis suggests a 'brain cloud' that permits the neural net to consolidate the contribution of individual neurons in different anatomic compartments into comprehensive recall, effectively an integration of units of dispersed units of cognitive information [52]." 

This does not describe a mechanism, and does not contain any specifics. Next to this hand-waving paragraph, we are given a diagram that fails to depict any specific theory of memory.  The diagram is below:

[Image: the paper's diagram, an example of neuroscientist hand-waving]

There is no substance here. The gray circles merely represent regions of the brain. The little "c" squares represent memories that the authors claim are stored in the brain. This isn't a depiction of a memory, nor is it a depiction of any real theory of neural storage of memories. All that we have represented is the vague idea of a brain storing memories. 

In their Conclusion section, we have nothing that summarizes any actual theory of brain memory storage or brain memory recall or brain memory preservation.  Three times in the section the authors claim to have described a mechanism, but no such mechanism was described. All that has gone on is a claim of memory storage, and some references to different parts of the brain, sprinkled with the tiniest bit of chemistry jargon (by use of the word cation).  Nothing but the haziest hand-waving has gone on here. 

And so it is throughout neuroscience literature. You will never find a single paper that even attempts to give a precise description of how a brain could store the simplest bit of knowledge such as "my dog has fleas." Another example of the empty hand-waving of neuroscientists in regard to memory can be found in the paper here, entitled "Why not connectomics?" We have this example of conceptually empty hand-waving about memory storage:

"Brains can encode experiences and learned skills in a form that persists for decades or longer. The physical instantiation of such stable traces of activity is not known, but it seems likely to us that they are embodied in the same way intrinsic behaviors (such as reflexes) are: that is, in the specific pattern of connections between nerve cells. In this view, experience alters connections between nerve cells to record a memory for later recall. Both the sensory experience that lays down a memory and its later recall are indeed trains of action potentials, but in-between, and persisting for long periods, is a stable physical structural entity that holds that memory. In this sense, a map of all the things the brain has put to memory is found in the structure—the connectional map."

The first sentence is groundless dogma. There is no evidence that brains "can encode experiences and learned skills in a form that persists for decades or longer." There is merely the fact that humans can have experiences and learn skills that they remember for decades. The second sentence is a confession that there is no understanding of how such a brain storage of memories can happen. The authors confess that "the physical instantiation of such stable traces of activity is not known." The claim that memories are stored by "the specific pattern of connections between nerve cells" is empty hand-waving, and the speculation stated is unbelievable. No one who has ever studied the connections between nerve cells (neurons) has ever seen anything like some symbolic pattern that could encode a record of human experiences or human learned skills or learned conceptual knowledge such as school learning. The brain does not have any such thing as a connection pattern reader that could read and interpret such patterns if they existed. Moreover, the connections between neurons are structural units too short-lived to explain human memories that reliably persist for decades. Synapses and the dendritic spines they connect to do not last for years, and the proteins they are made of have average lifetimes of only a few weeks. 

[Image: neuroscientist hand-waving]

Sunday, March 23, 2025

Y-Maze Memory Tests Are Almost as Unreliable as Freezing Behavior Tests

It cannot be said that the reliability of an experimental neuroscience paper is directly proportional to the reliability of the measurement techniques it uses. There are various reasons why you might have an utterly unreliable neuroscience experiment that used reliable measurement techniques, such as the experiment having used too small a study group size to produce a reliable result. But it can be said (roughly speaking) that the unreliability of an experimental neuroscience paper is directly proportional to the unreliability of any measurement techniques upon which the experiment depends. That is why when examining neuroscience experiments, we should always pay extremely close attention to whether the experiment used reliable measurement techniques. 

For decades very many neuroscience researchers have been senselessly using a ridiculously unreliable measurement technique: "freezing behavior" estimation. "Freezing behavior" estimations occur in scientific experiments involving memory. "Freezing behavior" judgments work like this:

(1) A rodent is trained to fear some particular stimulus, such as a red-colored shock plate in his cage. 

(2)  At some later time (maybe days later) the same rodent is placed in a cage that has the stimulus that previously provoked fear (such as the shock plate). 

(3) Someone (or perhaps some software) attempts to judge what percent of a certain length of time (such as 30 seconds or 60 seconds or maybe even four minutes) the rodent is immobile after being placed in the cage. Immobility of the rodent is interpreted as "freezing behavior" in which the rodent is "frozen in fear" because it remembered the fear-causing stimulus such as the shock plate. The percentage of time the rodent is immobile is interpreted as a measurement of how strongly the rodent remembers the fear stimulus. 
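Note that the whole "memory measurement" of step (3) boils down to a single percentage. Below is a minimal Python sketch of the computation (my own hypothetical illustration, with made-up numbers):

import numpy as np

# Hypothetical 60-second session sampled at 10 Hz:
# 1 = rodent judged immobile, 0 = rodent judged moving
rng = np.random.default_rng(1)
immobility_trace = rng.integers(0, 2, size=600)

freezing_percent = 100 * immobility_trace.mean()
print(f"{freezing_percent:.1f}% 'freezing behavior'")

Everything hinges on those subjective immobile-versus-moving judgments, and on the arbitrary choice of the time window.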

This is a ridiculously subjective and inaccurate way of measuring whether a rodent remembers the fear stimulus. There are numerous problems with this technique, which I explain in my post "All Papers Relying on Rodent 'Freezing Behavior' Estimations Are Junk Science." The technique is so unreliable that all experimental neuroscience studies relying on such a technique should be dismissed as worthless. 

There are other techniques used in neuroscience experiments. There are various types of maze techniques used. A mouse may be trained to find some food that requires traversing a particular maze. It is easy to time exactly how long the mouse takes to find the food, after a series of training trials. Then some modification might be made to the mouse (such as giving it an injection or removing part of its brain). The mouse can be put again in the maze, and a measurement can be made of how long it takes to find the food. If it took much longer to find the food, this might be evidence of a reduction in memory or learned knowledge. 

[Image: mouse and maze]

This seems like a pretty reliable technique. But there's another much less reliable technique called the "free exploratory paradigm." When this technique is used, a mouse is given some compartments to explore. The mouse is first only allowed to explore half or two-thirds of the compartments.  Then later the mouse is given the freedom to explore all of the compartments, including previously unexplored compartments.  Some attempt is made to measure what percent of the time the mouse spends in the never-previously-explored compartments compared to the previously explored compartments. 
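The quantity being measured is merely a ratio of dwell times. A minimal Python sketch (my own hypothetical illustration) of the computation:

def novelty_preference_percent(seconds_in_novel, seconds_in_familiar):
    # Percent of exploration time spent in never-previously-explored
    # compartments; 50% would mean no preference at all
    total = seconds_in_novel + seconds_in_familiar
    return 100 * seconds_in_novel / total

print(novelty_preference_percent(330, 270))   # 55.0

As we will see below, control animals score only around 55% on this measure, barely above the 50% that means nothing was detected.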

A figure in the paper "The free-exploratory paradigm as a model of trait anxiety in rats: Test–retest reliability" shows how this method might be used. First the rat is allowed to explore only the three compartments on the right, with access to the left compartments blocked. Then the rat is allowed to access all of the compartments, and some attempt is made to judge whether the rat spent more time in the left compartments than the right. 

The assumption is made that this can be some kind of test of memory. The experiment designers seem to have assumed that when a mouse goes to compartments already visited, the mouse will kind of recognize those compartments, and be less likely to explore them, perhaps having some kind of "I need not explore something I've already explored" experience. This is a very dubious assumption. 

It's as if the designers of this apparatus were assuming that a mouse is thinking something like this:

"My, my, these experimenter guys have given me six compartments to explore!  Well, there's no point in exploring any of the three compartments I already explored.  Been there, done that. So I guess I'll spend more time exploring the compartments I have not been to. I'm sure there will just be exactly the same stuff in the three compartments I've already explored, and that I need not spend any time re-exploring them to check whether there's something new in them." 

The assumptions behind this experimental design seem very dubious. It is not at all clear that a mouse would have any such tendency to recognize previous compartments the mouse had been in, and to think that such previously visited compartments were less worthy of exploration. 

The best way to test whether such assumptions are correct is by experimentation. Without doing anything to modify an animal's memory, you can simply test normal animals, and see whether they are less likely to spend time in compartments they previously visited. Figure 2 of the paper "The free-exploratory paradigm as a model of trait anxiety in rats: Test–retest reliability" gives us a good graph testing how reliable this "free-exploratory paradigm" is, using a 10-minute observation period. The test involved 30 rats:

The figure suggests that this "free-exploratory paradigm" is not a very reliable technique for judging whether rodents remembered something. In the first test, there was no tendency of the rats to spend more time exploring the unexplored compartments. In the second test there was only a slightly greater tendency of the rats to explore the previously unexplored compartments. Overall the rats spent only 55 percent of their time in the previously unexplored compartments, versus 45 percent of their time in the previously explored compartments. 

What is the relevance of this? It means that any neuroscience experiment that is based on this "free-exploratory paradigm" and fails to use a very large study group size is worthless.  An example of a worthless study based on such a technique is the study hailed by a press release this year, one with a headline of "Boosting brain’s waste removal system improves memory in old mice." No good evidence for any such thing was produced. 

The press release is promoting a study called "Meningeal lymphatics-microglia axis regulates synaptic physiology" which you can read here. That study all hinges upon an attempt to measure recall or recognition by mice, using something called a Y-maze, which consists of 3 compartments, the overall structure being shaped like the letter Y. The Y-maze (not actually a maze) is an implementation of the unreliable "free-exploratory paradigm" measurement technique described above. The study used a study group size of only 17 mice. But since the "free-exploratory paradigm" requires study group sizes much larger than 17 to provide any compelling evidence for anything, the study utterly fails as reliable evidence. 

Using a binomial probability calculator, we can compute the chance of getting a false alarm, using a measurement technique like the  "free-exploratory paradigm." Figure 1C of the paper "Meningeal lymphatics-microglia axis regulates synaptic physiology" shows only a very slight difference between the "free-exploratory paradigm" performance for the modified mice and the unmodified mice:

Given this "free-exploratory paradigm" that is something like only 55% effective in measuring recognition memory, the probability of getting results like this by chance (even if the experimental intervention has no real effect) is roughly the same as what we see in the calculation below:

[Image: binomial probability calculation, produced using the calculator here]

The chance of getting purely by chance a result like the result reported in the paper is roughly the 1 in 3 shown in the bottom line above. When we consider publication bias and the "file drawer" effect, getting a result like the reported result means nothing. Why? Because it would be merely necessary to try the experiment a few times before you could report a success, even if the experimental intervention had no effectiveness whatsoever. 
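Below is a minimal Python sketch of this kind of binomial calculation, using hypothetical numbers: a 17-mouse group, a threshold of 11 mice needed to call the result positive, and a 55% per-mouse chance of favoring the novel compartments even when the intervention does nothing:

from scipy.stats import binom

n_mice = 17        # study group size used by the paper
p_noise = 0.55     # per-mouse "success" chance with no real effect (assumed)
threshold = 11     # hypothetical cutoff for calling the result positive

# Probability of reaching the threshold purely by chance: P(X >= 11)
false_alarm = binom.sf(threshold - 1, n_mice, p_noise)
print(round(false_alarm, 2))   # ~0.29, roughly 1 in 3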

We should never be persuaded by results like this, because what could easily be happening is something like this:

  • Team 1 at some college tries this intervention, seeing no effect. Realizing null results are hard to get published, Team 1 files its results in its file drawer. 
  • Team 2 at some other college tries this intervention, seeing no effect. Realizing null results are hard to get published, Team 2 files its results in its file drawer. 
  • Team 3 tries this intervention, seeing a "statistically significant" effect of a type you would get in maybe 1 time in three tries. Team 3 submits its positive result for publication, and gets a paper published. 
In a scenario like the one above, there is no real evidence for the effect. All that is happening is a result like what we would expect to get by chance, even if the effect does not exist. 
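The arithmetic of that scenario is worth spelling out. If each team's attempt has about a 1 in 3 chance of a spurious "significant" result, then the chance that at least one of three teams gets a publishable fluke is about 70%:

p_fluke = 1 / 3                            # per-team chance of a false positive
p_some_team_publishes = 1 - (1 - p_fluke) ** 3
print(round(p_some_team_publishes, 2))     # 0.7

So most of the time at least one team gets a "positive" result from an intervention with zero real effect, and only that team's result reaches the journals.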

What we must also consider is that any researcher wanting to tilt the scales a bit can do so when using this free-exploratory paradigm. When these types of experiments are done, the compartments are not empty. Instead some items are put in the compartments. There is no standard protocol about what is put in the compartments. A researcher can put in the compartments anything he wants. Each compartment is supposed to have a few items, but there is no standard number or size of items to use. So imagine you are trying to show what looks like a loss of memory recognition in some experiment using this free-exploratory paradigm. All you need to do is put some less interesting items or fewer items in the unexplored compartments. And if you want to show what looks like an improvement in memory, you need merely put some more interesting items or more items in the unexplored compartments. Since there is no standard protocol used with this free-exploratory paradigm, an experimenter can get whatever result he wants, by varying conditions in the compartments. 

At the top I give a graph from the paper "The free-exploratory paradigm as a model of trait anxiety in rats: Test–retest reliability," which showed a mere 55% reliability using this free-exploratory paradigm in ten-minute tests, but a greater reliability with 15-minute tests. How long a time length does the paper "Meningeal lymphatics-microglia axis regulates synaptic physiology" use? Only 2 or 3 minutes. I doubt very much that there is any evidence that such tests have much more than 50% reliability with such a short time span. This is a common defect of both the free-exploratory paradigm and the "freezing behavior" approach: they can produce wildly different results depending on the time interval used. And since there is no standard for a time interval used, an experimenter can use any time interval, including some interval that has not been verified as having any decent reliability. This is all the more reason to think that such methods are "see whatever you are hoping to see" affairs that have no validity as solid measurement techniques for measuring recall or recognition in rodents. 

I can imagine how things might work: an animal may be tested for 10 minutes using either technique; and if the experimenter doesn't like the result in the full ten minutes, he can simply report in his paper on the first 5 minutes; and if he does not like that result, he can report in his paper on only the first three minutes; and so on and so forth. If the paper is not a pre-registered paper committing itself to an exact detailed observational protocol, an experimenter can get away with that; and few neuroscience experiments these days follow such a pre-registered approach. Today's experimental neuroscience is such a standard-weak freewheeling farce of loose and bad methods that it is probably considered permissible to gather a particular type of data for ten minutes, and then report on only the results gathered in any arbitrary fraction of those minutes, as long as you start from the beginning. 

In the paper here, it says, "In the Y-maze continuous procedure, the rat or mouse is placed in the maze for a defined period (typically 5 min) and the sequence of arm choices is recorded." But in the paper "Meningeal lymphatics-microglia axis regulates synaptic physiology" discussed above, the Y-maze test time was only 3 minutes; so we have a deviation from the typical procedure with this device. The same paper tells us "Hippocampectomized animals notoriously adopt side preferences, e.g., always turning right on a T-maze," something we can suspect may also be true in a Y-maze, giving another reason for doubting the suitability of such tests (both examples of the free exploratory paradigm) for testing memory modifications such as hippocampus lesions. 

The sad truth is that experiments done with this free-exploratory paradigm (such as a Y-maze experiment or a T-maze experiment) are worthless unless they use large study group sizes of at least 30 subjects per study group, and also an exact protocol that has been proven to be a reliable method of measuring recall or recognition in rodents. So we can have no confidence in the results reported by the study referred to above, the one called "Meningeal lymphatics-microglia axis regulates synaptic physiology" which you can read here. That study all hinges upon an attempt to measure recall or recognition using the free exploratory paradigm, but does not use a large enough study group size to produce a reliable result using that paradigm. And we have no evidence that the experimenters exactly followed a precise protocol proven to be a reliable measure of rodent recall. 

Neither the free-exploratory paradigm (such as Y-maze experiments) nor "freezing behavior" experiments produce reliable results when anyone uses study group sizes smaller than 30. Both are poor, unreliable ways of measuring recall or recognition in rodents, allowing so much flexibility and opportunity for bias that it's just a "see whatever you want to see" type of affair. But what kind of methods tend to produce good, reliable results in measuring recall in rodents? I can think of four:

(1) A "find the food reward" maze technique like the one described above, in which you measure how many seconds a rodent takes to find a food reward, using a maze the rodent had been previously trained on to find a food reward. 
(2) The Morris water maze test, a widely used test that is not really a maze test, but a test of how well a rodent will remember to find a submerged platform after previously being trained to find that platform in a water tank. However a scientific paper cautions that the Morris water maze test may not work well with many strains of mice, saying this: "Neuroscientists have been warned that many strains [of mice] perform poorly on the submerged-platform water escape test task, which is better suited to rats than to mice, yet it is used widely for the study of memory in mice." Another paper gives a similar reason for thinking that the Morris water maze test (MWM) may only be suitable for rats, stating this: "Interestingly, when MWM data were analyzed in a large dataset of 1500 mice by factor analysis, the principle factors affecting MWM performance in mice were noncognitive (Lipp and Wolfer 1998).... It is important to note that this is not the case in rats, but the fact that performance factors are salient in mice provides an important cautionary note when interpreting mouse MWM data." 
(3) A fear recall technique, measuring spikes in heart rate. The heart rate of a mouse will very dramatically spike when the mouse is afraid. So a mouse can be trained to fear some painful stimulus such as a shock plate. Then the mouse can be placed in a cage that has the fear-inducing stimulus. If the mouse's heart rate speeds up very much, that is good evidence that the mouse has remembered the fear-inducing stimulus such as the shock plate. 
(4) The Fear Stimulus Avoidance technique depicted below, which does not require heart-rate measurement. After being trained to fear some fearful stimulus such as a shock plate, the mouse can be placed in a cage that offers two paths to a food reward: one path that requires going through the fearful stimulus such as a shock plate, and another path to the food reward that is physically much harder to traverse, such as a path requiring climbing steep stairs. If a rodent takes the much harder path to get to the food reward, that is good evidence that it remembered the pain caused by the fearful stimulus such as the shock plate. 

[Image: a good way to measure recall in rodents]

The mere use of a more reliable measurement technique does not guarantee a reliable result. While the Morris water maze test seems to be a reliable test when used with rats, it must be used with a big enough study group size, and very many neuroscience experimenters fail to do that. A paper notes the problem, stating this about the Morris Water Maze test (MWM):

"Many MWM experiments are reported with small group sizes. In our experience with the MWM and other water mazes, group sizes less than 10 can be unreliable and we use 15 to 20 animals per group, especially for mice, whose performance in learning and memory tests tends to be more variable than for rats. It is noteworthy that regulatory authorities require that safety studies have 20 or 25 animals per group. This number is for each of at least four groups (control and three dose levels) (Food and Drug Administration 2007; Gad 2009; Tyl and Marr 2012). Such group sizes are used by the US Environmental Protection Agency, the US Food and Drug Administration, the Organization for Economic Cooperation and Development, and Japanese and European Union regulatory agencies. Although the 3 Rs (reduce, refine, and replace) are worthwhile goals in the use of animals in research, it is not a justification to underpower experiments and run the risk of false positives, which, in the long run, cost more time, more animals, and more money to prove or disprove."

Postscript:  The term "spontaneous alternation behavior" is used to describe a case in which a rodent that has explored one arm of a T-maze or Y-maze is exposed to the maze again, and switches to a different arm. The higher the average "spontaneous alternation percentage" is (the higher above 50%) in control rodents, the more reliable such a T-maze or Y-maze is as a test of memory; and a well-established average "spontaneous alternation percentage" of maybe 75% would indicate a pretty good test. The graph here shows female controls showing such behavior only 55% of the time, and male controls showing such behavior 60% of the time; but the sample size is only 5.  Figure 4 of the paper here shows control rats using such "spontaneous alternation behavior" only 50% of the time in a Y-maze. The sample size is only 6. The graph here shows only about 60% "spontaneous alternation behavior" for 9 control rodents tested with a Y-maze. Figure 3 of the paper here shows male control rodents showing such "spontaneous alternation behavior" only about 50% of the time.  Figure 1 of the paper here shows a "spontaneous alternation percentage" of only about 57% for 6 control mice. In Figure 1 of the paper here, the "spontaneous alternation percentage" is only 35% in control rodents. These results are consistent with my claim above that such tests are not-very-reliable tests requiring large study group sizes to produce even borderline, modest evidence of a memory effect. 

The problem with a measurement technique that only gives you the right answer about 60% of the time is that when using such a technique it is really easy to get false alarms, particularly with small study group sizes. So you have no basis for strong confidence in some study testing only about 15 rats using a T-maze or a Y-maze. 
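We can quantify just how hard it is for a roughly-55%-reliable measure to produce real evidence. Below is a minimal Python simulation (my own sketch, with assumed numbers: a true per-animal "success" rate of 55%, tested against pure chance with a one-sided binomial test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(n_animals, p_true=0.55, alpha=0.05, n_sims=20_000):
    # Fraction of simulated experiments in which a one-sided binomial
    # test against chance (p = 0.5) reaches significance
    successes = rng.binomial(n_animals, p_true, size=n_sims)
    p_values = stats.binom.sf(successes - 1, n_animals, 0.5)  # P(X >= successes)
    return (p_values < alpha).mean()

for n in (10, 17, 30, 100, 500):
    print(n, round(power(n), 2))

Under these assumptions, even 100 animals per group detect the effect only about a quarter of the time, and the 10-to-17-animal group sizes typical of neuroscience experiments almost never do.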

Friday, March 21, 2025

Claimed Evidence for "Concept Cells" Is Just Noise-Mining Nonsense

Quanta Magazine is a widely-read online magazine with slick graphics. On topics of science the magazine is again and again guilty of the most glaring failures. The articles at Quanta Magazine often contain misleading prose, groundless boasts or the most glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here and here.

The latest piece of nonsense in Quanta Magazine is an article trying to persuade us that scientists have discovered "concept cells" in the brain. No such thing has occurred. What is mainly going on is noise mining, the morally dubious exploitation of very sick epilepsy patients, and scientists trying to get citations and attention by wrongly applying unjustified nicknames to cells. 

The research discussed works like this:

(1) Electrodes are implanted in the brains of very sick epilepsy patients requiring surgery for epilepsy, supposedly for the sake of surgical evaluation on where to perform the surgery (although we should suspect that additional electrodes are being implanted so that this type of noise-mining research can be done). 

(2) The patients are shown some visual stimuli, and EEG readings of their brain waves are taken while they see these stimuli. 

(3) The data is then analyzed, with searches made for particular neurons that fired more often when some particular image was seen.  Scientists then make a triumphal declaration that a "concept cell" was found, on the basis of the claim that some neuron was firing more frequently than we would expect when some visual stimulus was seen. 

This is noise-mining, like someone searching 1000 photos of clouds looking for one that looks like the ghost of an animal.  Neurons in the brain fire continuously, at a rate between 1 time per second and 200 times per second. Anyone tracking the firing of 300 neurons while someone is seeing different things will be able to find a few neurons that seemed to fire more often when something was seen. Similarly, if someone records the ups and downs of 300 stocks on the New York Stock exchange, and tries to correlate them with the occurrence of images coming from his television set showing an old movie, he will be able to find a few stocks that went up or down more often when some particular image was displayed. But that would be mere noise-mining. 
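It is easy to demonstrate this with a simulation. The Python sketch below (my own illustration, with assumed numbers) generates 300 purely random "neurons" whose firing has no relation whatsoever to any stimulus, and then screens them the way "concept cell" hunters do:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_neurons = 300
n_target_trials, n_other_trials = 10, 90
mean_rate = 5.0   # spikes per second; pure noise, no stimulus effect at all

# Poisson spike counts, identical in distribution for every trial
target = rng.poisson(mean_rate, size=(n_neurons, n_target_trials))
other = rng.poisson(mean_rate, size=(n_neurons, n_other_trials))

# Screen each neuron for "significantly" elevated firing on target trials
p_values = np.array([
    stats.mannwhitneyu(t, o, alternative="greater").pvalue
    for t, o in zip(target, other)
])
print((p_values < 0.05).sum())   # roughly 15 spurious "concept cells"

About 5% of the purely random neurons pass the screen, so a hunt through a few hundred noisy neurons will reliably turn up a handful of "concept cells" that encode nothing at all.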

It is never justified to speak of single-neuron "responses" to a concept or an experience.  A single neuron does not respond to something a  person sees or recalls or thinks about. A neuron fires continuously at a rate of about 1 time per second or more, with random variations. It is always misleading to try and suggest a stimulus and response relation between a neuron firing and something someone saw or thought of or recalled.  This is like tracking many flu-infected people who each cough hundreds of times a day, and boasting about having found some "concept coughs," claiming that some of the coughs are a "response" to an image a person sees on a TV.  

The main paper discussed is the late 2024 paper "Concept and location neurons in the human brain provide the 'what' and 'where' in memory formation." The paper does nothing to find any evidence of memories stored in brains. All that is going on is noise-mining of the firings of 3681 neurons in 13 epilepsy patients who had electrodes implanted in their brains. 

We should have no trust in the statistical analysis done in the paper, which is largely dependent upon a large body of programming code that looks like it is poorly written, and which performs all kinds of obscure or arbitrary convolutions and manipulations of data. You can see the programming code here. The EEG data is being processed in many a strange way, and is being passed through all kinds of contortion processes including many doubly-nested loops doing God-only-knows-what, as in this example:

for k=1:numXbins
    for j=1:numYbins
        n(k,j) = sum( wavs(:,k) <= ybins(j)+ybinSize/2 & wavs(:,k) > ybins(j)-ybinSize/2 );
    end
end

No peer reviewer could ever untangle the programmatic "witches' brew" that is going on in the programming code of this paper. 

[Image: torturing data until it confesses]

A look at the peer reviewer comments on the paper gives us some hints about the paper's defects. One peer reviewer asks this:

"These recordings come from epilepsy patients. How were possible epilepsy-related confounds mitigated?"

Here's what this comment refers to: epilepsy patients scheduled for surgery have all kinds of weird brain wave anomalies cropping up in their brain waves as recorded by EEG devices. The possibilities for getting false alarms from EEG readings from epilepsy patients are endless. The paper authors respond to this question in an unconvincing way, by claiming that they were using EEG analysis software that did something to reduce such a problem. It is not a convincing response. 

When we read calculations of p-values in papers like this, we typically get no detailed discussion of how such a p-value was computed. We should have little trust in the accuracy of the calculated p-values. In the case of this paper, one of the peer reviewers says that he did his own calculation, and got a p-value drastically different from one of the p-values published in the paper.  

[Image: brain wave noise mining. Caption: the game of "keep torturing the data until it confesses"]

This experiment was based on nonsensical assumptions. The brain has billions of neurons and trillions of synapses. There never was any reason to believe that studying the firing rate of any single neuron during the observation of sights by subjects would produce any evidence that such a neuron encodes or recognizes or represents or is sensitive to any concept. The idea that you would get meaningful evidence of such a thing from analyzing the firing of only 3681 randomly selected neurons (from a brain of billions of neurons) never made any sense, particularly given the extremely large random variations in the firings of noisy neurons. If an individual neuron encoded a concept, we would expect that an experiment such as this would have less than 1 chance in a million of success, since 3681 is less than a millionth of the total number of neurons in a brain (with roughly 86 billion neurons in a brain, a sample of 3681 is only about 1 part in 23 million). 

In any study in which subjects are moving muscles, you can never assume that some neuron has some connection to a concept because greater firing activity occurred when that concept was displayed or chosen.  It is well-known that muscle movements abundantly contaminate EEG readings that indicate how much neurons are firing. So in any study in which subjects are not perfectly immobile, you have no way of knowing whether some increased neuron firing is merely due to some increase or difference in muscle movement. In this study subjects were not perfectly immobile when the EEG readings occurred. They were instead performing activities with their hands. For example, we read, "The participant was asked to confirm every image location by tapping it within the presentation time window (1.5–3.5 s)."

The idea of a concept cell is nonsense. No one has any coherent idea of how a single cell could represent a concept, or why a cell would have a tendency to respond more frequently when a viewer is exposed to just one concept. Since the paper is so dependent upon black-box spaghetti code that programmatically fools around with the gathered data in many strange and tangled ways, no one should have any confidence that evidence of a "concept cell" was found. 

Below is a visual (Figure 2) from a scientific paper that did single-cell recordings of the firings of individual neurons in monkeys. Each one of the vertical bars represents a spike in the firing rate of a neuron. There is no clear relation between the firing rate and the stimulus presented to the monkey. The "clumpy-bursty" example was one cherry-picked to show the strongest evidence of response to the stimulus. A visual like this makes clear that spikes or blips in the firing rate of a neuron occur randomly many times a day. It is deceptive to pick out a case where one of these blips occurs when the stimulus occurred, and to call such a blip a "response" to the stimulus. But such a deception is what is occurring in papers claiming to have found "concept cells." 

[Image: blips in neuron firing rates]

The paper discussed above ("Concept and location neurons in the human brain provide the 'what' and 'where' in memory formation") gives us no assertion that the microelectrodes implanted in the brains of these very sick epilepsy patients were inserted only for medical reasons, to evaluate where they should have surgery. Since we have no such assertion, we should suspect that one or more of the microelectrodes were unnecessarily implanted in the brains of very sick patients, for the sake of this poor-quality study based on nonsensical assumptions. Implantation of microelectrodes in the brain comes with very serious risks. People requiring epilepsy surgery are very sick people who should not be put at higher risk for the sake of low-quality research such as this. 

The paper discussed above implanted many microelectrodes in epilepsy patients. The type of electrode normally used for surgical evaluation of epilepsy patients is a much larger type of electrode called a macroelectrode. A scientific paper tells us, "Sixty-five years after single units were first recorded in the human brain, there remain no established clinical indications for microelectrode recordings in the presurgical evaluation of patients with epilepsy (Cash and Hochberg, 2015)." In other words, there is no medical justification for implanting microelectrodes in the brains of epilepsy patients. The paper tells us that the microelectrodes were "inserted through the hollow clinical macro electrodes, and protruding from the tips by ~4 mm." This was medically unnecessary and potentially hazardous insertion of many wires into the brains of very sick patients.  We seem to have here some very sick patients being put at needless risk merely so that junk science can be produced. 

[Image: reckless neuroscientist]

Here is a quote from a scientific paper:

"The effects of penetrating microelectrode implantation on brain tissues according to the literature data...  are as follows:

  1. Disruption of the blood–brain barrier (BBB);
  2. Tissue deformation;
  3. Scarring of the brain tissue around the implant, i.e., gliosis;
  4. Chronic inflammation after microelectrode implantation;
  5. Neuronal cells loss."
I strongly advise any people who participated in any brain scanning experiment or any neuroscience experiment involving electrode implants to permanently keep very careful records of their participation, to find out and write down the name of the scientific paper corresponding to the study, to write down and keep the names of any scientists or helpers they were involved with, to permanently keep a copy of any forms they signed, and to keep a very careful log of any health problems they ever have. Such information may be useful should such a person decide to file a lawsuit or a claim seeking monetary damages. 

Postscript: The term "gnostic cells" has sometimes been used to mean the same thing as "concept cells."

More noise-mining nonsense is found in the recent paper "Lack of context modulation in human single neuron responses in the medial temporal lobe," which you can read here. This time the study group is even smaller, consisting of only 9 very sick patients.  The authors have failed to make their code publicly available through any easy access method, as if they were embarrassed by their programming. But using the link here you can download their code as a .zip file, scan the .zip file for viruses, and then extract it, which is a laborious way to have to inspect code. I did that, and found the usual very-poorly-documented spaghetti code programming horror that you typically find in projects like this. We have undocumented loops like the one below, doing some unfathomable rigmarole contortions of the brain wave data:

for iresp=1:size(resp,2)
    if ~isempty(resp(iresp).trecall_phasic_ms)
        resp_rec = resp_rec + 1;
        active_cluster = resp(iresp).spike_times_Rec;
        phasic_recall = resp(iresp).trecall_phasic_ms;
        ntrials(1) = size(active_cluster{resp(iresp).responsive_Storiesindex(1)},1);
        ntrials(2) = size(active_cluster{resp(iresp).responsive_Storiesindex(2)},1);
        strength_pair = NaN*ones(max(ntrials),2);
    
        equiv_strength(resp_rec).chan = resp(iresp).channel_number;
        equiv_strength(resp_rec).class = resp(iresp).cluster;
        equiv_strength(resp_rec).pair = resp(iresp).responsive_Storiesindex;
            
        for istim=1:2 
            phas_vec = phasic_recall{resp(iresp).responsive_Storiesindex(istim)};
            spikes1 = active_cluster{resp(iresp).responsive_Storiesindex(istim)};
            spikes1 = arrayfun(@(k) spikes1{k}-phas_vec(k),[1:length(phas_vec)]','UniformOutput',false);
            strength_pair(1:ntrials(istim),istim) = cell2mat(cellfun(@(x) sum((x< twin_off) & (x> twin_on)),spikes1,'UniformOutput',0));
        end
            
    
        nsamp = min(ntrials);
        equiv_strength(resp_rec).samples = ntrials;
        equiv_strength(resp_rec).meandiff_Hz = (diff(nanmean(strength_pair)))/((twin_off-twin_on)/1000);
        equiv_strength(resp_rec).delta = sqrt(2/nsamp*(norminv(alpha)+norminv(alpha/2))^2);
        [equiv_strength(resp_rec).test_resu, equiv_strength(resp_rec).pval, equiv_strength(resp_rec).pooledSD, equiv_strength(resp_rec).meandiff] = TOST_2023(strength_pair(~isnan(strength_pair(:,1)),1), strength_pair(~isnan(strength_pair(:,2)),2), 'welch',equiv_strength(resp_rec).delta,alpha);
    end
end

And we have the equally ugly unfathomable bit of monkey business below:

for ii=1:length(spikes)
        if iscell(spikes{ii})
            all_spks=cell2mat(spikes{ii}');
        else
            all_spks=spikes{ii}(:);
        end

        spikes_tot=all_spks(all_spks < tmax_epoch+half_ancho_gauss & all_spks > tmin_epoch-half_ancho_gauss);
        spike_timeline = hist(spikes_tot,(tmin_epoch-half_ancho_gauss:sample_period:tmax_epoch+half_ancho_gauss))/ntrials(ii);
        n_spike_timeline = length(spike_timeline); %should be the same length as ejex
        integ_timeline_stim = conv(spike_timeline, int_window);
        integ_timeline_stim_cut = integ_timeline_stim(round(half_ancho_gauss/sample_period)+1:n_spike_timeline+round(half_ancho_gauss/sample_period));
        aver_fr{ii} =  smooth(integ_timeline_stim_cut(which_times),smooth_bin);

        % subplot(length(spikes),1,ii)
        % plot(ejex(which_times(1:downs:end)),aver_fr{ii}(1:downs:end))
        % maxi(ii) = max(aver_fr{ii});
    end  

I can merely say that the more time a programmer spends looking at the programming code used by this paper, the less confidence he will have that the paper did anything to establish the existence of "concept cells." This is black-box "witches' brew" monkeying around with brain wave data that is best described with the phrase "they kept torturing the data until it confessed in the weakest whisper."

A visual from the paper gives you the type of data that was being tortured to try to get something. Random cases of more noise blips from noisy and extremely variable neurons (during a few seconds) are being passed off as "concept responses." It's like someone tracking each and every noise blip from a nearby street construction crew using a jackhammer, and trying to correlate particular noise spikes with images appearing on his TV set. That would be a very silly case of noise mining, and what is going on in this paper is just as silly. 


One of the authors of this poor-quality paper is Rodrigo Quian Quiroga, who has long been quoted as making a misleading claim about a "Jennifer Aniston" concept cell.  For example:
  • In a 2017 article "Concept cells: the building blocks of declarative memory functions" by Quian Quiroga, he incorrectly stated, "One of the first such neurons found in the hippocampus fired to seven different pictures of the actress Jennifer Aniston and not to 80 other pictures of known and unknown people, animals and places." This was a claim that did not match the data in his 2005 paper, where we have in Figure 1A a visual depiction of more than 50 firings of that neuron when the subject was shown a picture other than Jennifer Aniston.
  • In Figure 1 of the year 2020 "Searching for the neural correlates of human intelligence" article by Quian Quiroga, he misleadingly states that the Jennifer Aniston neuron "did not respond to about 80 pictures of other persons," a claim which does not match the data in his 2005 paper where we have in Figure 1A a visual depiction of more than 50 firings of that neuron when the subject was shown a picture other than Jennifer Aniston.  
  • In a 2025 Salon article with many misstatements, Quian Quiroga made this incorrect statement: "Twenty years ago … I was doing experiments with a patient, and then I showed many pictures of Jennifer Aniston, and I found a neuron that responded only to her and to nothing else... It was very clear that in an area called the hippocampus that is known to be critical for memory, we have neurons that represent, in this case, specific people, or in general, specific concepts." The claim does not match what was reported in Quian Quiroga's 2005 paper, where we have in Figure 1A a visual depiction of more than 50 firings of that neuron when the subject was shown a picture other than Jennifer Aniston.  
In fact, Figure 1A of the 2005 paper shows that the famed "Jennifer Aniston neuron" did fire more than seven times when a picture of a basketball player was shown, that it did fire more than seven times when a picture of another basketball player was shown, that it did fire more than five times when a picture of a snake was shown, and that it did fire more than six times when a picture of the Leaning Tower of Pisa was shown. The paper says this: "To hold their attention, patients had to perform a simple task during all sessions (indicating with a key press whether a human face was present in the image)." It is known that muscle movements can abundantly affect EEG readings and whether a neuron fires. A difference between key presses when the Jennifer Aniston pictures were shown and when things other than faces were shown can easily account for why one neuron may have fired differently when pictures of Jennifer were shown, without any need at all to invoke an idea of a "concept cell" for Jennifer Aniston. Maybe Quian Quiroga's misstatements on his "Jennifer Aniston neuron" had something to do with the fact that he later managed to get a book deal, a deal for a book with a title mentioning his claimed "Jennifer Aniston" neuron. 

When he discusses this "Jennifer Aniston neuron" outside of the original paper, Quian Quiroga seems to never mention that the subject with that neuron was asked to press a button whenever he saw a face, and that muscle movements are known to increase brain wave spikes picked up by electrodes or EEG devices.  That seems like the "secret sauce" behind his "Jennifer Aniston" neuron. 

There is nothing very impressive about this case of the neuron's firing more often while someone saw a few pictures of Jennifer Aniston. While unlikely to occur on any one day, it is the kind of result you would expect an eager noise miner to produce after spending very long periods of time looking for some result better than chance in a dataset of random data. Similarly, if someone is very eager to find a cloud shape looking like the ghost of an animal, and he spends day after day scanning photos of clouds looking for such a thing, he will probably find one or two clouds that look rather like animal ghosts. Activity like this is correctly described as misguided noise-mining "Jesus in my toast" pareidolia, aided by misstatements in which the meager result is described in a misleading way.