Saturday, September 30, 2023

The Human Brain Project Spent Billions, But Failed to Bolster Claims About Brains Making Minds

In 2013 the European Union launched the 2.7-billion-dollar Human Brain Project. Its incredibly ambitious goal was to create a supercomputer model of the human brain. The EU's Human Brain Project announced goals of proving conventional dogmas about the brain. At the page here we read that "the HBP is conducting a coordinated series of experiments to identify the neuronal mechanisms behind episodic memory, and validate them by computational models and robotic systems."  This is an assertion of the unproven dogma that episodic memory can be explained by brain processes; and it is a goofy statement, given how silly it is to think that such a dogma could be validated by building computer models or doing robotics research. 

Today (September 30, 2023) the Human Brain Project has officially concluded. It has released a statement here summarizing what it accomplished in its ten years of heavy funding, a statement entitled "The Human Brain Project ends: What has been achieved." We read the following statements, which fail to mention any notable discoveries about the brain:

  • "The HBP has produced more than 3000 academic publications and more than 160 digital tools, medical and technological applications, an open research infrastructure – EBRAINS – as well as a multinational and uniquely interdisciplinary community that would not have come together otherwise."
  • "The HBP has driven outstanding advances in brain research and in the development of medicine and technology applications. Among the research highlights accomplished by the HBP are the world-leading 3D atlases of the brain, breakthroughs in personalised brain medicine, and the development of new brain-inspired technologies, e.g., in artificial intelligence and neuromorphic computing."
  • "The HBP has built a digital platform that fosters large-scale collaborations. The EBRAINS digital research infrastructure offers access to digital tools, models, data and services, facilitating the integration of brain science across disciplines and national borders."
There is no mention in the wrap-up document of any specific discovery that was made about the brain from all this research. The document does not claim that anything was done to back up the main claims neuroscientists make about the brain, such as the claims that brains are the source of the human mind and the storage place of memories. 

The wrap-up document does have a link to a scientific paper. We read this:

"Over the last year, a position paper initiated by the HBP about the future of digital neuroscience has been collaboratively written by around 100 international authors from inside and outside of the project. 'The coming decade of digital brain research – A vision for neuroscience at the intersection of technology and computing' has been published, with an executive summary outlining the main points." 

The executive summary document makes no claims about specific important discoveries about the brain made by the Human Brain Project.  The document makes this untrue claim: "Using models of cerebellar, hippocampal and sensory areas, scientists are building robots increasingly capable of exploring and learning from their environment, based on principles from embodied cognition." Scientists do not build robots based on things they have learned about brains. So-called "neural nets" do not have an architecture matching any brain architecture, and the brain has nothing like the logic-heavy software that drives robots. DNA is sometimes wrongly compared to software, but it lacks the "if/then" logic that is at the center of software.  

The summary document discusses a "roadmap" of future research goals that includes these not-yet-accomplished items on a "to-do" list:

"1. Identify and integrate the rules of plasticity,
learning and adaptation into existing multilevel brain models.
2. Identify constraints of brain plasticity and tools
to modulate it for the benefit of patients.
3. Reveal mechanisms of memory consolidation
and translate this to medicine and technology."

It sounds just as if nothing has been done to verify any brain basis for learning, and that the authors are "crossing their fingers" hoping that they might be able to discover such a thing in the future. But if there were a brain mechanism for learning, wouldn't scientists have already discovered it after spending billions since 2013 on brain research?  How could scientists still have failed to discover a brain mechanism for learning, if it existed, when biochemists discovered the DNA basis of inheritance in the early 1950s, and scientific instruments are so much more powerful today? 

In a square marked "Cognition and Behavior" we have these "to-do" items as goals of future research:

1. "Develop a coherent framework describing
the mechanisms of cognitive functions using
a multiscale perspective, from sensory- and
visuomotor to more complex cognitive functions.
2. Formulate a coherent framework for language,
as a uniquely human complex cognitive function, integrating insights from linguistics and neuroscientific research using multilevel brain
approaches, and using brain development as a
window to specialisation.
3. Link concepts of different hypotheses about
self-consciousness to each other and to mechanisms at the cellular, molecular and genetic levels."

It sounds just as if nothing has been done to verify any brain basis for cognitive functions, and that the authors are "crossing their fingers" hoping that they might be able to discover such a thing in the future. But if there were a brain mechanism for cognitive functions such as thinking and understanding, wouldn't scientists have already discovered it after spending billions since 2013 on brain research?

The full paper referred to by the wrap-up document (co-authored by 100+ neuroscientists) states this confession: "To name but a few examples, the formation of memories and the basis of conscious perception, crossing the threshold of awareness, the interplay of electrical and molecular-biochemical mechanisms of signal transduction at synapses, the role of glial cells in signal transduction and metabolism, the role of different brain states in the life-long reorganization of the synaptic structure or the mechanism of how cell assemblies generate a concrete cognitive function are all important processes that remain to be characterized." Got it: after spending billions, our neuroscientists are still empty-handed looking for a neural basis of memory and mind. 

What we have is a situation rather like scientists asking for billions to research finding crashed extraterrestrial spaceships at the bottom of Lake Tahoe, and then announcing (after spending those billions) that they hope in the next ten years to find evidence of a spaceship at the bottom of Lake Tahoe. Our neuroscientists of the Human Brain Project were given billions over ten years to back up their claims that brains store memories and that brains make minds. They failed to provide any evidence backing up such claims. And now they're like, "Don't worry -- we have those things on our to-do list." 

There are a couple of good principles to remember here. One is that if you don't know how, then you probably don't know if.  For example, do you not know how John Finkleheimer killed Sally Sorrow? Then you probably do not know that John Finkleheimer did kill Sally Sorrow. And if you don't know how brains can store memories or recall memories, then you sure don't know that brains do store memories or recall memories. You merely know that people store memories or recall memories.

Another good principle to remember is: if thousands of people spent billions of dollars looking for something in a human organ and didn't find it, then it probably doesn't exist in such an organ.  Many thousands of neuroscientists have used billions of dollars looking for some mechanism by which brains could store and retrieve memories, and they never found such a mechanism. They also never found any sign of learned information or human memories by studying brain tissue.  From such a failure we should conclude that such things probably do not exist in brains. 

Members of a rigid belief community with evidence-ignoring dogmas resembling the creed of some sect, our neuroscientists senselessly keep thinking to themselves, "To explain minds and memory, we must keep looking in the brain!"  A similar error might occur on a planet Evercloudy, one perpetually covered in thick clouds. Looking for the source of the heat that warms their planet, and being unable to see their sun, and lacking the imagination to postulate such a sun, the scientists on such a planet might follow a principle of "keep looking for heat sources on the ground or underground." But that would be a senseless principle. The main source of their planet's heat would be from something unseen, something outside of their planet. Since brain explanations for mind and memory are never credible, and since brains have very serious shortfalls which rule them out as explanations for mind and memory, we must look for a source of human minds outside of our bodies. Our scientists keep looking only for bottom-up explanations for minds, but they should be considering top-down explanations. 

Saturday, September 23, 2023

All Papers Relying on Rodent "Freezing Behavior" Estimations Are Junk Science

Normally we assume that when scientists do experiments, they want to measure things as accurately as possible. But that may not always be the case. There are some reasons why scientists may actually prefer to use a method that poorly measures something.  They include the following:

(1) Nowadays there exist these two very bad problems in science:  publication quotas and publication bias. Publication bias is the tendency of science journals to prefer to publish scientific papers that announce positive results showing some effect, rather than null results that fail to show any effect. Publication quotas are prevailing traditions in academia under which every professor is supposed to have his name as an author on a certain number of papers by some particular point in his career. Often described under the name of "publish or perish," publication quotas are typically informal, but very real. An assistant professor may not be formally told that he must have his name on a certain number of papers to become a full professor, but he will tend to know that his chance of advancement in academia will be very low if he does not have enough published papers on his resume. 

The combination of publication bias and publication quotas may create a strong preference for inaccurate and subjective measurement techniques. The more inaccurate and subjective a measurement technique, the greater the possibility of "see whatever you want to see," the greater the chance that the fervently desired "positive result" can be reported. 

(2) Another very large problem in scientific research is ideological bias: the tendency of science publication to prefer papers that conform to the most popular ideas prevailing in research communities. Whenever a prevailing ideology is incorrect, the more inaccurate and subjective a measurement technique, the greater the likelihood that the writer of a scientific paper can report some result conforming to the ideology prevailing in his research community. 

Let us look at a case in which scientists for decades have been senselessly using a ridiculously unreliable measurement technique: the case of "freezing behavior" estimations. "Freezing behavior" estimations occur in scientific experiments involving memory. "Freezing behavior" judgments work like this:

(1) A rodent is trained to fear some particular stimulus, such as a red-colored shock plate in his cage. 

(2)  At some later time (maybe days later) the same rodent is placed in a cage that has the stimulus that previously provoked fear (such as the shock plate). 

(3) Someone (or perhaps some software) attempts to judge what percent of a certain length of time (such as 30 seconds or 60 seconds or maybe even four minutes) the rodent is immobile after being placed in the cage. Immobility of the rodent is interpreted as "freezing behavior" in which the rodent is "frozen in fear" because it remembered the fear-causing stimulus such as the shock plate. The percentage of time the rodent is immobile is interpreted as a measurement of how strongly the rodent remembers the fear stimulus. 
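
To make step (3) concrete, here is a minimal Python sketch (not any lab's actual pipeline) of how a "freezing percentage" might be computed from a motion trace. The motion values, the immobility threshold, and the interval length are all hypothetical; the thing to notice is how much the resulting score depends on those arbitrary choices.

```python
# A minimal sketch of computing a "freezing percentage" from a motion trace.
# The motion values, immobility threshold, and interval length are all
# hypothetical.

def freezing_percentage(motion, threshold, interval_frames):
    """Percent of the first interval_frames frames scored as 'freezing'
    because their motion value falls below the threshold."""
    window = motion[:interval_frames]
    frozen = sum(1 for m in window if m < threshold)
    return 100.0 * frozen / len(window)

# Hypothetical per-frame motion scores (e.g., pixels changed between frames).
motion = [2, 1, 0, 0, 1, 9, 14, 3, 0, 1, 0, 0, 2, 11, 1, 0]

# The same trace yields very different "memory" scores depending on two
# arbitrary analysis choices:
print(freezing_percentage(motion, threshold=2, interval_frames=8))   # 50.0
print(freezing_percentage(motion, threshold=5, interval_frames=8))   # 87.5
print(freezing_percentage(motion, threshold=2, interval_frames=16))  # 62.5
```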

This is a ridiculously subjective and inaccurate way of measuring whether a rodent remembers the fear stimulus. There are numerous problems with this technique:

(1) There are two contradictory ways in which a rodent might physically respond after seeing something associated with fear: a flight response (in which the rodent attempts to escape) and a freezing response (in which the rodent freezes, not moving). It is all but impossible to disentangle which response is displayed when the rodent is presented with a fear stimulus. A rodent who remembers a fear stimulus might move around trying to escape the feared stimulus. But under the "freezing behavior" method, such movement would not be recorded as memory of the feared stimulus, even though the fear stimulus was recalled. 

(2) Rodents often have hard-to-judge movement behavior that neither seems like immobility nor fleeing behavior, and it is subjective and unreliable to judge whether such movement is or is not "freezing behavior" or immobility. 

(3) Movement of a rodent in a cage may be largely random, and not a good indication of whether the rodent is afraid and whether the rodent is recalling some fear stimulus. 

(4) Rodents encountering a fear-provoking stimulus in human homes (such as a mouse hearing a human shriek) almost never display freezing behavior, and much more commonly display fleeing behavior. I lived in a New York City apartment for many years in which I would suddenly encounter mice, maybe about 10 times a year. I never once saw a mouse freeze, but invariably saw them flee. 

(5) Freezing behavior in a rodent may last for a mere instant, as in humans. So it may be extremely fallacious to do something such as trying to observe 30 seconds or 60 seconds or several minutes of rodent movement or non-movement, and trying to judge whether fear or recall occurred by computing a "freezing percentage" over such an interval. Almost all of that time may be random behavior having nothing to do with fear or memory recall in the rodent. Contrary to all sensible methods, what we often see in neuroscience papers is some technique in which someone tries to judge "freezing behavior" by judging non-movement over a length of several minutes. An example is the science paper here, in which the authors senselessly judge fear recall by estimating non-movement in a rodent over the span of four minutes. 

(6) Attempts to judge "freezing behavior" typically ignore a fact of key importance: whether the rodent avoided the stimulus the rodent was conditioned to fear. Let's imagine two cases. In case 1 a rodent put in a cage with a stimulus he was conditioned to fear (such as a shock plate) spends most of the measured interval not moving, and then goes directly to the fear stimulus, such as stepping on the shock plate. In case 2 a rodent nervously moves around in the cage, entirely avoiding the fear stimulus such as a shock plate.  Clearly the rodent in case 2 acts like an animal that remembers the fear stimulus, and the rodent in case 1 acts like an animal that does not remember the stimulus. But under the absurd method of judging fear recall by estimating "freezing behavior," the rodent in case 1 will be counted as better remembering the fear stimulus, because that rodent displayed more "freezing behavior."  This example shows how absurd "freezing behavior" estimations are as a measure of whether a rodent recalled something or feared something.  Obviously there's something very wrong with a technique that can lead you to think that remembering rodents forgot, and that forgetting rodents remembered. 

[Diagram: the fallacy of freezing behavior estimation]

How can memory and fear recall be reliably measured in rodents? There are at least three techniques. One costs a little bit of money, and the other two can be done without spending much of anything. 

Reliable Way of Measuring Rodent Fear Recall #1: Measuring Heart Rate Spikes

It has been shown that when animals such as mice are exposed to fear-inducing stimuli, their heart rate dramatically spikes. According to a scientific paper, a simple air puff will cause a mouse's heart rate to increase from nearly 500 beats per minute to near 700 beats per minute. We read this: "The mean HR [heart rate] responses from the seven mice showed that HR increased significantly from the basal level of 494±27 bpm to 690±24 bpm to the first air puff (P<0.001)."  The same paper tells us that similar increases in heart rate occur when mice are dropped or subjected to a simulated earthquake by means of a little cage shaking. So rather than using the very unreliable method of trying to judge "freezing behavior" to determine how well a mouse remembered a fearful stimulus, scientists could use the reliable method of looking for sudden heart rate spikes. 
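
Heart-rate scoring leaves little room for subjectivity. Below is a minimal sketch (not any published protocol): the 494 bpm baseline is the figure from the paper quoted above, while the sample trace and the 25% rise criterion are hypothetical.

```python
# A minimal sketch of spike-based fear scoring. The 494 bpm baseline is the
# figure quoted above; the sample trace and rise criterion are hypothetical.

def fear_response_detected(heart_rates_bpm, baseline_bpm, rise_fraction=0.25):
    """Flag a fear response if heart rate ever exceeds baseline by the
    given fraction (default: a 25% rise)."""
    spike_level = baseline_bpm * (1 + rise_fraction)
    return any(hr > spike_level for hr in heart_rates_bpm)

baseline = 494                                           # resting rate, bpm
samples_after_stimulus = [501, 512, 655, 688, 690, 671]  # hypothetical trace

print(fear_response_detected(samples_after_stimulus, baseline))  # True
```

Either the rate spikes far above baseline or it does not; there is no room here for "see whatever you want to see."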

Reliable Way of Measuring Rodent Fear Recall #2: Tracking Fearful Stimulus Avoidance

The method described above has the slight drawback of requiring the purchase of rodent heart rate monitors.  But there's another method that does not have any such drawback: the method of simply recording whether a fearful stimulus was avoided. The method is shown in the diagram below. 

[Diagram: a cage in which food can be reached either by a direct route across a central shock plate or by a harder route up and down a set of stairs]
Using this technique, a mouse is trained to avoid a fear stimulus -- the red shock plate shown in the center of the diagram. At some later date the mouse (in a hungry state) is put into the cage. If the mouse does not remember that the shock plate will cause pain, the mouse will take the direct route to the cheese, which requires crossing over the shock plate. If the mouse does remember that the shock plate will cause pain, the mouse will take an indirect and harder route, requiring it to jump up and down a set of stairs.  This is an easy and foolproof method of testing memory recall in rodents. Here we have a nice binary result -- either the mouse touches the shock plate, or it doesn't. There's no subjective element at all. 

A Third Way of Measuring Rodent Recall: The Morris Water Maze Test

The widely used Morris Water Maze test is a fairly reliable way of measuring recall in rodents. The water maze consists of a circular open tank rather like a child's bathing tub, deeper than a rodent's length, with a hidden platform on one side of the tank, an inch or two below the water surface. A rodent placed in the tub has to tread water to stay afloat. Eventually the rodent will discover that by swimming to the hidden platform it can comfortably rest, without having to tread water.  You train the rodent by exposing it to the water maze a certain number of times, until you find that the rodent immediately goes to the hidden platform.  Then later the rodent's memory can be tested by putting the rodent in the same Morris Water Maze tank, and seeing whether it quickly swims to the platform. The main drawback of the Morris Water Maze is that if something was done to a mouse that inhibited muscular skills but not memory, the mouse may fail the test even though there was no change in memory.  
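
Water maze performance is usually quantified as escape latency: the seconds a rodent takes to reach the hidden platform. Here is a minimal sketch with hypothetical trial times; the pattern shown (latency falling with training and staying low at a later test) is how learning and recall are read off.

```python
# A minimal sketch of the standard Morris Water Maze metric: escape latency.
# All trial times below are hypothetical.

def mean_escape_latency(latencies_seconds):
    """Average time to reach the hidden platform across trials."""
    return sum(latencies_seconds) / len(latencies_seconds)

training_trials = [55.2, 31.0, 18.4, 9.1]  # latency falls as the rodent learns
retention_test = [8.7, 7.9]                # later test: low latency = recall

print(mean_escape_latency(training_trials))  # about 28.4 seconds
print(mean_escape_latency(retention_test))   # about 8.3 seconds
```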

Why Do Neuroscientists Continue to Use Unreliable "Freezing Behavior" Estimations for Judging Rodent Recall?

The methods discussed above are obviously superior to the error-prone and subjective "freezing behavior" estimation method. So why do experimental neuroscientists continue to cling to such a "freezing behavior" estimation method, using it so often? It is entirely reasonable to suspect that many neuroscientists cling to their "freezing behavior" method for the very reason that it is unreliable and subjective, allowing neuroscientists to see whatever they want to see. By clinging to unreliable "freezing behavior" estimation, neuroscientists have a better chance of being able to report some result they can call a positive result. 

A paper describing variations in how "freezing behavior" is judged reveals that no standard is being followed. The paper is entitled "Systematic Review and Methodological Considerations for the Use of Single Prolonged Stress and Fear Extinction Retention in Rodents." The paper has the section below telling us that statistical techniques to judge "freezing behavior" in rodents are "all over the map," with no standard statistical method being used:

"For example, studies using cued fear extinction retention testing with 10 cue presentations reported a variety of statistical methods to evaluate freezing during extinction retention. Within the studies evaluated, approaches have included the evaluation of freezing in individual trials, blocks of 2–4 trials, and subsets of trials separated across early and late phases of extinction retention. For example, a repeated measures analysis of variance (RMANOVA) of baseline and all 10 individual trials was used in Chen et al. (2018), while a RMANOVA was applied on 10 individual trials, without including baseline freezing, in Harada et al. (2008). Patterns of trial blocking have also been used for cued extinction retention testing across 10 trials, including blocks of 2 and 4 trials (Keller et al., 2015a). Comparisons within and across an early and late phase of testing have also been used, reflecting the secondary extinction process that occurs during extinction retention as animals are repeatedly re-exposed to the conditioned cue across the extinction retention trials. For example, an RMANOVA on trials separated into an early phase (first 5 trials) and late phase (last 5 trials) was used in Chen et al. (2018) and Chaby et al. (2019). Similarly, trials were averaged within an early and late phase and measured with separate ANOVAs (George et al., 2015). Knox et al. (2012a,b) also averaged trials within an early and late phase and compared across phases using a two factors design.

Baseline freezing, prior to the first extinction retention cue presentation, has been analyzed separately and can be increased by SPS (George et al., 2015) or not affected (Knox et al., 2012b; Keller et al., 2015a). To account for potential individual differences in baseline freezing, researchers have calculated extinction indexes by subtracting baseline freezing from the average percent freezing across 10 cued extinction retention trials (Knox et al., 2012b). In humans, extinction retention indexes have been used to account for individual differences in the strength of the fear association acquired during cued fear conditioning (Milad et al., 2007, 2009; Rabinak et al., 2014; McLaughlin et al., 2015) and the strength of cued extinction learning (Rabinak et al., 2014).

In contrast with the cued fear conditioning studies evaluated, some studies using contextual fear conditioning used repeated days of extinction training to assess retention across multiple exposures. In these studies, freezing was averaged within each day and analyzed with a RMANOVA or two-way ANOVA across days (Yamamoto et al., 2008; Matsumoto et al., 2013; Kataoka et al., 2018). Representative values for a trial day are generated using variable methodologies: the percentage of time generated using sampling over time with categorical hand-scoring of freezing (Kohda et al., 2007), percentage of time yielded by a continuous automated software (Harada et al., 2008), or total seconds spent freezing (Imanaka et al., 2006; Iwamoto et al., 2007). Variability in data processing, trial blocking, and statistical analysis complicate meta-analysis efforts, such that it is challenging to effectively compare results of studies and generate effect size estimates despite similar methodologies."

As far as the techniques that are used to judge so-called "freezing behavior" in rodents, the techniques are "all over the map," with the widest variation between researchers. The paper tells us this:

"Another source of variability is the method for the detection of behavior during the trials (detailed in Table 1). Freezing behavior is quantified as a proxy for fear using manual scoring (36% of studies; 12/33), automated software (48% of studies; 16/33), or not specified in 5 studies (15%). Operational definitions of freezing were variable and provided in only 67% of studies (22/33), but were often explained as complete immobility except for movement necessary for respiration. Variability in freezing measurements, from the same experimental conditions, can derive from differential detection methods. For example, continuous vs. time sampling measurements, variation between scoring software, the operational definition of freezing, and the use of exclusion criteria (considerations detailed in section Recommendations for Freezing Detection and Data Analysis). Overall, 33% of studies did not state whether the freezing analysis was continuous or used a time sampling approach (11/33). Of those that did specify, 55% used continuous analysis and 45% used time sampling (12/33 and 10/33, respectively). Several software packages were used across the 33 studies evaluated: Anymaze (25%), Freezescan (14%), Dr. Rat Rodent's Behavior System (7%), Packwin 2.0 (4%), Freezeframe (4%), and Video Freeze (4%). Software packages vary in the level of validation for the detection of freezing and the number and role of automated vs. user-determined thresholds to define freezing. These features result in differential relationships between software vs. manually coded freezing behavior (Haines and Chuang, 1993Marchand et al., 2003Anagnostaras et al., 2010). Despite the high variability that can derive from software thresholds (Luyten et al., 2014), threshold settings are only occasionally reported (for example in fear conditioning following SPS). There are other software features that can also affect the concordance between freezing measure detected manually or using software, including whether background subtraction is used (Marchand et al., 2003) and the quality of the video recording (frames per second, lighting, background contrast, camera resolution, etc.; Pham et al., 2009), which were also rarely reported. These variables can be disseminated through published protocols, supplementary methods, or recorded in internal laboratory protocol documents to ensure consistency between experiments within a lab. Variability in software settings can determine whether or not group differences are detected (Luyten et al., 2014), and therefore it is difficult to assess the degree to which freezing quantification methods contribute to variability across SPS studies with the current level of detail in reporting. Meuth et al. (2013) tested the differences in freezing measurements across laboratories by providing laboratories with the same fear extinction videos to be evaluated under local conditions. They found that some discrepancies between laboratories in percent freezing detection reached 40% between observers, and discordance was high for both manual and automated freezing detection methods." 

It's very clear from the quotes above that once a neuroscience researcher has decided to use "freezing behavior" to judge fear, then he pretty much has a nice little "see whatever I want to see" situation. Since no standard protocol is being used in these estimations of so-called "freezing behavior," a neuroscientist can pretty much report exactly whatever he wants to see in regard to "freezing behavior," by just switching around the way in which "freezing behavior" is estimated, until the desired result appears. We should not make here the mistake of assuming that those using automated software for judging "freezing behavior" are getting objective results.  Most software has user-controlled options that a user can change to help him see whatever he wants to see. 

To help get reliable and reproducible results, neuroscientists doing experiments involving recall or fear recall in animals should use only a simple and reliable method for measuring fear or recall in rodents: either the measurement of heart rate spikes, or the Fear Stimulus Avoidance technique described above, or the Morris Water Maze test.  But alas, experimental neuroscientists seem to prefer to use an unreliable "see whatever you want to see" method, quite possibly because that vastly increases the opportunity for them to report "statistically significant" results or positive results rather than null results. 

What we must always remember is that the modern experimental neuroscientist is not primarily interested in producing accurate results, but is instead primarily interested in producing publishable results, defined as any result that will end up getting published in a scientific journal. The modern experimental neuroscientist is also extremely interested in producing "citation magnet" results, defined as any results that will end up getting more paper citations.  Alas, today's neuroscientists are not judged by whether they use intelligent and accurate experimental methods. Today's neuroscientists are rather mindlessly judged by their peers on the basis of how many papers they can claim to have co-authored, and how many citations such papers have gotten. And so we see neuroscience papers like the one below, in which more than 100 scientists appear as the authors of a single paper, as if the main idea was just to up the paper count of as many people as possible. 

[Image: a scientific paper listing more than 100 authors]

A simple rule should be followed about this matter: any and all papers writing up experimental research depending upon claims of freezing behavior by rodents should be regarded as junk science unworthy of serious attention. Trying to measure "freezing behavior" is not a reliable way of measuring memory recall or fear in rodents.  Very many of the most widely reported neuroscience studies rely on this junk method, and all such studies are junk studies.  A heavy use of "freezing behavior" estimation is only one of the glaring defects of neuroscience experimental research, where Questionable Research Practices are extremely common.  Other glaring procedural defects very common in neuroscience experimental research include the all-too-common use of way-too-small study group sizes, a failure to pre-register a hypothesis and the methods to be used for gathering and analyzing data, p-hacking, a failure to follow blinding protocols, and a failure to do sample size calculations to determine how large study group sizes should be. 

You should not assume that peer review prevents bad neuroscience research from getting published.  The people who peer-review neuroscience research routinely fail to exclude poorly designed experimental research.  The peer reviewers of such research are typically neuroscientists who perform the same kind of poorly designed research themselves.  Peer reviewers senselessly follow a rule of "allow papers to be published if they resemble recent previously published papers."  When some group of scientists is following bad customs (such as we see massively in theoretical physics, theoretical phylogenetics, theoretical cosmology,  and experimental neuroscience),  such a rule completely fails to block junk research from being published. 

Postscript: The paper "To freeze or not to freeze" gives us additional reasons for disbelieving that "freezing behavior" judgments are reliable ways of measuring fear or recall in rodents.  We read that "Male and female rats respond to a fearful experience in different ways, but this was not previously taken into account in research." Below are some quotes:

"Gruene, Shansky and their colleagues – Katelyn Flick and Alexis Stefano of Northeastern, and Stephen Shea of Cold Spring Harbor Laboratories – found that instead of freezing, many female rats display a brief, high-velocity movement termed darting...Gruene et al. found that female rats performed more darts per minute than males. However, not all females dart, and not all males freeze: in the experiments approximately 40% of the females engaged in darting behavior, but only about 10% of males did so....The finding that a higher proportion of female rats dart may explain why previous studies have reported less freezing in females (e.g., Maren et al., 1994; Pryce et al., 1999)."

The paper "The Difference between Male and Female Rats in Terms of Freezing and Aversive Ultrasonic Vocalization in an Active Avoidance Test" tells us this: "We found that males were more likely to experience freezing (40%) than females (3.7%)."  Evidently male rats perform much differently than female rats in regard to freezing, but our neuroscientists very often fail to even specify which sex was used in some experiment they did. 

When "freezing behavior" judgments are made, there are no standards in regard to how long an animal should be observed when recording a "freezing percentage" (a percentage of time the animal was immobile). An experimenter can choose any length of time between 30 seconds and five minutes or more (even though it is senseless to assume rodents might "freeze in fear" for as long as a minute).  Neuroscience experiments typically fail to pre-register experimental methods, leaving experimenters to make analysis choices "on the fly." So you can imagine how things work. An experimenter might judge how much movement occurred during five minutes or ten minutes after a rodent was exposed to a fear stimulus. If a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 30 seconds, then 30 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise, if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over 60 seconds, then 60 seconds would be chosen as the interval to be used for a "freezing percentage" graph. Otherwise, if a desired above-average amount of immobility (or a desired below-average amount of immobility) occurred over two minutes, then two minutes would be chosen as the interval to be used for a "freezing percentage" graph. And so on and so forth, up until five minutes or ten minutes. Such shenanigans drastically depart from good, honest, reliable experimental methods, and any researcher engaging in such shenanigans should be ashamed of himself. 
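
The inflation produced by such interval-shopping can be quantified with a toy simulation (hypothetical numbers, not data from any study). The two groups below are statistically identical, yet an experimenter who may pick whichever of four scoring intervals gives a significant result crosses the nominal 5% false-positive rate far more often than one who pre-registered a single interval.

```python
# A toy simulation of interval-shopping in "freezing percentage" analysis.
# Per-second "immobility" is pure noise, so any detected group difference
# is a false positive. All numbers are hypothetical.

import random
import statistics

def immobility_trace(seconds, p_immobile=0.3):
    """Per-second immobile/moving flags for one rodent -- pure noise."""
    return [random.random() < p_immobile for _ in range(seconds)]

def freezing_fraction(trace, interval_seconds):
    return sum(trace[:interval_seconds]) / interval_seconds

def welch_t(a, b):
    """Welch's two-sample t statistic."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

random.seed(0)
INTERVALS = [30, 60, 120, 300]   # candidate scoring windows, in seconds
CRITICAL_T = 2.145               # rough two-tailed 5% cutoff for ~14 df
RUNS = 2000
fixed_hits = shopped_hits = 0
for _ in range(RUNS):
    group_a = [immobility_trace(300) for _ in range(8)]
    group_b = [immobility_trace(300) for _ in range(8)]
    t_values = []
    for interval in INTERVALS:
        fa = [freezing_fraction(t, interval) for t in group_a]
        fb = [freezing_fraction(t, interval) for t in group_b]
        t_values.append(abs(welch_t(fa, fb)))
    fixed_hits += t_values[0] > CRITICAL_T       # honest: one pre-chosen window
    shopped_hits += max(t_values) > CRITICAL_T   # shopped: best of four windows

print(f"fixed interval:    {100 * fixed_hits / RUNS:.1f}% false positives")
print(f"interval shopping: {100 * shopped_hits / RUNS:.1f}% false positives")
```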

It should be crystal-clear by now: no one is reliably measuring fear or recall or memory in a paper relying on "freezing behavior" judgments, and in such a paper we should trust no claims made about fear or recall or memory in rodent subjects.

Sunday, September 17, 2023

Analysis of 23,810 Brain Scans Finds "Neither Structural Nor Functional Imaging Predicted Individual Psychology"

There is an appalling lack of quality in the vast majority of brain imaging studies trying to find correlations between brain characteristics and either mental activity or mental characteristics. The great majority of such studies use way-too-small study group sizes such as only 15 or 20. A press release from the University of Minnesota Twin Cities announced results which indicate that such small-sample correlation-seeking brain imaging experiments are utterly unreliable.  The headline of the press release is "Brain studies show thousands of participants are needed for accurate results." The abstract of the paper in the science journal Nature can be read here. The paper is entitled, "Reproducible brain-wide association studies require thousands of individuals." 

Recently an attempt was made to find correlations between brain imaging and psychological characteristics. Rather than following the usual nonsense of getting about 15 or 20 new subjects to have their brains scanned, the study took the much wiser approach of using brain scan data from 23,810 subjects already collected by the UK Biobank. I can't over-emphasize how important it is for correlation-seeking experimental studies to use existing brain scan data whenever such data exists and is sufficient.  When an experimental study does unnecessary fresh new brain scans of a few dozen subjects (particularly using the more powerful 7T scanners) rather than using existing brain scan data already gathered in much greater quantities, this amounts to poor science practice, and it subjects humans to needless risk without any medical justification. See my post here for the type of risks involved. 

The authors of a new study really "shook the trees" and "went the extra mile" looking for some correlation between brain states and psychological states. The study is entitled "The legibility of the imaged human brain." We read this:

"Across 23810 unique participants from UK Biobank, we systematically evaluate the predictability of 25 individual biological characteristics, from all available combinations of structural and functional neuroimaging data. Over 4526 GPU hours of computation, we train, optimize, and evaluate out-of-sample 700 individual predictive models, including multilayer perceptrons of demographic, psychological, serological, chronic morbidity, and functional connectivity characteristics, and both uni- and multi-modal 3D convolutional neural network models of macro- and micro-structural brain imaging."

When the brain scan data was collected by the UK Biobank, which occurred before this study began, the subjects also had tests or interviews done to determine psychological traits which have these names in that database: mood swings, miserableness, irritability, sensitivity, fed-up, nervous, anxious, tense, worry, lonely, guilty.  The authors of the new study made the most elaborate efforts to "slice and dice" the data, trying to find some correlation between the brain states of their 23,810 subjects and the psychological states of those subjects. 

Since they tried 700 different models, we are reminded of the old adage "keep torturing the data sufficiently, and it will confess to anything." But in this case there was no confession. The authors of the new study were unable to find any real correlation between brain states and mental states. A section of their paper referring to brain imaging (such as fMRI scans) is entitled "Psychological characteristics are poorly predicted by imaging." 
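
For what it's worth, honest out-of-sample evaluation is exactly the thing that prevents data torture from producing a confession. Below is a toy sketch (hypothetical data, not the study's actual pipeline): a linear model fit to pure noise looks impressive on its own training subjects while predicting nothing about held-out subjects.

```python
# A toy sketch of out-of-sample evaluation. With many noise features and few
# subjects, a linear model "fits" its training sample yet predicts nothing
# about held-out subjects. All data here is randomly generated.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 120, 30
X = rng.normal(size=(n_subjects, n_features))  # stand-in "imaging" features
y = rng.normal(size=n_subjects)                # an unrelated "mood" score

train, test = slice(0, 60), slice(60, 120)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared(y[train], X[train] @ coef))  # clearly positive: overfit noise
print(r_squared(y[test], X[test] @ coef))    # near zero or negative: no signal
```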

The authors found that they could predict age and sex well from brain imaging data, and also handedness (whether one is right-handed or left-handed). But it was a totally different story for psychological characteristics. We read this: "The addition of any neuroimaging, whether structural or functional, generally offered no material benefit" in being able to predict a psychological characteristic. The authors state this: "Our analysis shows that whereas constitutional characteristics—age, sex, and weight—are highly predictable from neuroimaging, psychology, chronic illness, and serological characteristics are not."  

You could fairly summarize the results by saying this: brains don't look any different when someone is moody, miserable, irritable, sensitive, fed-up, nervous, anxious, tense, worrying, lonely or guilty. The authors' result is consistent with the claim that the brain is not the source of the human mind. 

Monday, September 11, 2023

The 11 Biggest Neural Shortfalls Hinting Brains Don't Make Minds

There are quite a few ways in which the human brain suggests to us that scientists are dead wrong when they claim that the brain is the source of our minds and dead wrong when they claim the brain is the storage place of our memories. Below is a discussion of the eleven biggest shortfalls of the brain suggesting that such claims are very much in error. 

Neural Shortfall #1:  The Lack of Any Stable Place in a Brain Where Learned Information Could Be Stored for Decades

Humans can hold memories for 50 years or longer. If you are to explain such memories under the idea that brains store memories, there needs to be a place in the brain that can store memories for 50 years or longer. There is no such place in the brain. 

Here we have a gigantic failure of the most common idea of neural memory storage, the groundless claim that memories are stored in synapses. The proteins in synapses are very short-lived, having lifetimes of two weeks or shorter. Synapses do not last for years. Synapses are so small that it is all but impossible to track how long they last. But we know that synapses are connected to little bumps on dendrites, called dendritic spines, which are much larger than synapses. Scientists can track how dendritic spines change over time. Such observations have shown that dendritic spines are short-lived, usually lasting no more than a few weeks or months, and seemingly never lasting for years.

With the exception of the DNA inside the nucleus of neurons, neurons offer no place for stable information storage. In theory, you could store information for decades in the DNA in the nucleus of a neuron. But there is no sign that human learned information is ever written to DNA. When doctors extract some part of a brain to reduce seizures, they find neurons that all have the same information in their DNA: mere genetic information that is basically the same in every cell, including cells from your feet. The genetic code used by DNA is merely a code allowing for the representation of amino acids, not a code allowing the representation of the things humans learn in school.  

Neural Shortfall #2:  The Lack of Any Mechanism in the Brain for Writing Learned Information

We know how a computer stores information. There is a spinning hard disk, and what is called a read/write head.  The head is a tiny unit capable of writing to any point on the disk, and reading from any point on the disk. Both the disk and the read/write head can move, which allow the read/write head to be positioned over any point on the disk, so that data can be read from anywhere on the disk, or written to anywhere on the disk. 

If brains store memories, a brain would need to have some kind of mechanism for writing a new memory. But no such mechanism seems to exist. There is no kind of "read unit" or "write unit" that moves around from place to place on a brain to read and write. 

In this regard neuroscientists are empty-handed. Don't be fooled by claims that the artificial effect called long-term potentiation is some kind of mechanism for memory.  What is misleadingly called “long-term potentiation” or LTP is a not-very-long-lasting effect by which certain types of high-frequency stimulation (such as stimulation by electrodes) produce an increase in synaptic strength. Chemical synapses are junctions between nerve cells, containing tiny gaps that neurotransmitters cross. The evidence that LTP even occurs when people remember things is not very strong, and in 1999 a scientist stated (after decades of research on LTP) the following:

"[Scientists] have never been able to see it and actually correlate it with learning and memory. In other words, they've never been able to train an animal, look inside the brain, and see evidence that LTP occurred."

Since it does not last for years, and is almost always a very short-term effect, LTP cannot be some mechanism by which permanent memories are written to the brain. A recent scientific paper states this:

"It actually remains to be demonstrated that LTP = memory in most mammalian learning models. In fact, most studies of long-term plasticity do not explore beyond an hour or two; clearly not enough to establish a direct link with long-term memory formation."

Neural Shortfall #3:  The Lack of Any Mechanism in the Brain for Reading Learned Information

There is in the brain no sign of any read mechanism.  We can imagine what a read mechanism might look like in a brain if a brain stored memories. It might be some little unit that moved around from place to place in the brain. When such a read unit came to some place in the brain where data was to be read, the read unit might linger for a while until the data was read. Nothing like any such unit exists in the brain. The only thing that moves around in the brain is blood and chemicals that pass between synapses. There is no sign of anything like a read unit that moves around from one place to another in the brain to read information. There is no sign of anything like a scan mechanism in the brain. 

Neural Shortfall #4:  The Lack of Any Signal Transmission Mechanism in the Brain Fast Enough to Account for Instant Human Recall and Fast Thinking

In a post entitled "Authorities Spur Wrong Ideas About the Complexity of Proteins" on one of my other blogs, I discussed how search engines such as Google give you misleading ideas when you perform a simple search trying to find out how complex are the protein molecules inside our body.  Something very similar happens when you search for how fast brain signals travel.  If you type in a Google search phrase of "speed of brain signals" you get as the first line a sentence saying "The brain can send signals to the body at up to 100 meters per second."  But you will commit a huge error if you remember that speed as "the speed of brain signals." 

Such a speed is the fastest speed that signals can ever travel in a brain, when traveling across myelinated axons.  But it is not at all the average speed of signal transmission in the brain. Almost all signal transmission in the brain occurs through a very much slower process involving transmission through chemical synapses (by far the most common type of synapses in the brain). Transmission across chemical synapses is relatively slow. Each time a signal crosses a chemical synapse, there is a delay called a synaptic delay, a delay of between about 0.5 and 1 millisecond. The problem is that the brain is filled with as many as 100 trillion synapses, with as many as a thousand synapses for each neuron. To move through a few inches of brain tissue, a signal would have to cross very many synapses, resulting in a very substantial delay. In my post here I explain why the cumulative delay of having to travel across many synapses means that the average speed of brain signals should be no greater than about 1 centimeter per second (about 0.4 inches per second), which is about the speed of a snail. 
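
The arithmetic behind that estimate fits in a few lines. The roughly 1-millisecond delay per synapse is the figure given above; the assumption that a multi-synapse path crosses a synapse about every 10 micrometers of tissue is a hypothetical round number used purely for illustration.

```python
# A back-of-the-envelope sketch of the cumulative-delay estimate above.
synaptic_delay_seconds = 0.001  # roughly 1 ms per chemical synapse crossing
meters_per_crossing = 10e-6     # assumed: one crossing per ~10 micrometers

effective_speed = meters_per_crossing / synaptic_delay_seconds
print(effective_speed, "m/s")   # 0.01 m/s, i.e. about 1 centimeter per second
```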

A 2015 paper describes the speed of the transmission of various brain signals, without considering synaptic slowing factors. It lists these brain signal speeds:

"Interestingly, signal propagation speeds in various conditions are similar (~0.1 m/s). Neural spikes generated by 4-aminopyridine (4-AP) travel with a longi-tudinal speed of 0.09/0.03 m/s along the CA3 region (Kiblerand Durand, 2011), whereas in the presence of picrotoxin, synchronous firing events propagate longitudinally at 0.14 /0.04m/s (Miles et al., 1988). High K+-, low Mg2+-, and zero-Ca2+- triggered spikes again exhibit speeds of 0.07-0.1 m/s, 0.1– 0.15m/s, and 0.04 – 0.15 m/s, respectively (Haas and Jefferys, 1984;Quilichini et al., 2002;Liu et al., 2013). In normal tissue, theta oscillations travel with a speed of 0.08 – 0.107 m/s in the hip-pocampus of living rodent rats (Lubenov and Siapas, 2009),whereas carbachol-induced theta oscillations travels with a speed of 0.119 m/s along the CA1 cell layer and a 0.141 m/s along the CA3 cell layer (Cappaert et al., 2009). Together, it is clear that 0.1m/s is a common propagation speed regardless of experimental models...Other propagation mechanisms, such as extracellular ionic transients and axonal conduction mechanisms, have very different propagation speeds (0.0004 – 0.008 m/s for K+ diffusion and 0.3– 0.5 m/s for axonal conduction, Miles et al., 1988;Lian et al., 2001;Francis et al., 2003;Meeks and Mennerick, 2007;Jensen, 2008;Kibler et al., 2012)."

Given such numbers, my estimate above of an average brain signal speed of only about 1 centimeter per second seems roughly correct.  Some of the speeds quoted above are about five to ten times faster than my estimate, but the estimate for K+ (potassium) diffusion is much slower than my estimate. Most of the numbers quoted above are about 1000 times slower than the incorrect "100 meters per second" figure often given for brain signal speed.  And the speeds quoted above are best-case figures that do not take account of all the slowing produced by cumulative synaptic delays and other slowing factors.  

The 2021 paper "Brain Activity Fluctuations Propagate as Waves Traversing the Cortical Hierarchy" mentions an average speed of about 10 millimeters per second, which is very close to the rough estimate I give above of a brain signal speed of about 1 centimeter per second (10 millimeters per second is the same as 1 centimeter per second). We read this: "The top-down propagations (N = 8519) and bottom-up propagations (N = 18 114) were found to account for 9.08% and 19.7% of the total scanning time, respectively, with an average speed of 13.45 ± 7.78 and 13.74 ± 7.51 mm/s (mean ± SD), respectively." 

We should very carefully note that the average speed of signals in the brain is way too slow to account for blazing-fast human thinking and instant human recall. See my post here for examples of exceptionally fast thinkers who calculated so fast we could never explain their results by imagining brain signals traveling at sluggish speeds such as 1 centimeter per second. 

Neural Shortfall #5:  The Lack of Any Encoding System by Which Brains Could Translate Learned Information Into Synapse States or Neural States

Just as there is pretty much no such thing as “just storing” information on a computer, without using some type of encoding, there could be no such thing as “just storing” a memory in the brain. For a brain to store memories, it would have to do some encoding by which sensory information or thought information was converted into neural states or synapse states. Neuroscientists love to keep using the word "encoding" when referring to memory, such as saying that memory formation begins with encoding. But neuroscientists have not the slightest understanding of how a brain could encode the type of things that humans experience and learn. 

When words are stored on a computer or smartphone, multiple levels of encoding are involved. First, there is the alphabetic encoding. Then there is what is called an ASCII encoding. The individual letters are converted to numbers, using an ASCII table that signifies how particular letters will be stored as particular numbers (an example of an ASCII table is below). Then there is a binary encoding by which numbers such as 34 written in the base 10 system are converted to a series of 1's and 0's such as 100010.

[Diagram: an ASCII table, an example of an encoding protocol]

Below we can see the three different types of encoding going on when the word “cat” is stored on a computer.
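
This is a minimal Python illustration; the code values printed are standard ASCII.

```python
# Three levels of encoding for the word "cat": alphabetic characters, their
# ASCII code numbers, and the binary digits actually stored.

word = "cat"
ascii_codes = [ord(ch) for ch in word]
binary_form = [format(code, "08b") for code in ascii_codes]

print(ascii_codes)  # [99, 97, 116]
print(binary_form)  # ['01100011', '01100001', '01110100']
```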


When an image is stored on a computer, there are also multiple types of encoding going on. The image is broken down into a grid of pixels; each pixel is translated into a number indicating a particular shade of color; then those numbers are translated into binary form that is ideal for storing on a computer hard drive.

[Diagram: encoding of a visual image as binary]

Now let us consider how the brain might store memories. If we are to imagine that the brain stores memories, we must imagine that the brain somehow does encoding to get the memory stored to the brain. But for a brain to store everything humans learn and experience, it would seem to need not just one encoding protocol, but many different encoding protocols.  This is because of the vast variety of things humans can learn and experience, which range from auditory to visual to purely conceptual, and which range from experiential (involving self-experience) to accounts of other people's experience. One very big problem is that if this magnificent suite of encoding protocols existed, it would be a "miracle of design" that would vastly worsen the problem of explaining a natural origin of humans. 

But the biggest problem is that there simply is no sign of any such protocols being used anywhere in the brain. Scientists are unable to discover the slightest trace of any such protocols in the brain. Could it be that the microscopic encoding needed to encode memories in a brain would just be too hard to discover? No, that idea does not hold water. Scientists were able in the period between about 1948 and the mid-1960s to discover a microscopic encoding protocol used in the human body: the genetic code, by which triple combinations of nucleotide base pairs stand for amino acids.  If the brain was doing multiple types of encoding by which human experiences and human learned knowledge were converted into brain states or synapse states, that would have left a "footprint" that scientists would have been able to discover half a century ago. Today's microscope technology is vastly greater than the technology that deciphered the genetic code. But still neuroscientists have found no signs of any code by which human experiences and human learned knowledge could be translated into neural states. 

Neural Shortfall #6:  The Lack of Any Region of the Brain With Activity Strongly Correlating With Thinking, Memory Formation or Recall

A recent story in the Sunday Times of London is a showcase for a variety of objectionable claims and behavior by today's neuroscientists. The story (which I won't link to because it's behind a paywall) is entitled "The brain scientist who wants to zap away your depression."  We have again the appalling practice of subjecting a journalist to some medically unnecessary brain treatment, presumably so that the journalist can act like some "embedded journalist" who writes favorable stories about those he has been entangled with.   Here is a sickening quote about some brain zapping that is not part of a funded scientific experiment and has no medical value for the subject:

" 'This might a little bit uncomfortable,' she says, pressing the button. Tap, tap, tap, tap. Repeated jolts fire into my brain. Pause. Tap, tap, tap, tap. My face screws up as the muscles in my cheek spasm. It does not hurt but it certainly is not pleasant."

We have a long plug for a book written by a neuroscientist, a book pushing the nonsensical idea that everything is measurable. The neuroscientist makes the ludicrous claim, "I don't think depression is a real natural phenomenon." She also makes the very misleading insinuation that we've got mental health wrong because people aren't emphasizing brains, a claim contrary to the truth that chemical and neuroscience approaches have been the main approaches taken for decades, with neuroscience failing to deliver for psychiatry. As part of the article we have one of those "function localization" charts claiming that particular parts of the brain have particular cognitive functions.

There is no robust evidence for such claims. The lack of any region of the brain with activity strongly correlating with thinking, memory formation or recall is one of many reasons for rejecting the claim that the brain is the source of thinking and the storage place of memories. For posts discussing this lack, see my post "The Brain Shows No Sign of Working Harder During Thinking or Recall,"  my post "Studies Debunk Hippocampus Memory Myths," and my post "Reasons for Doubting Thought Comes from the Frontal Lobes or Prefrontal Cortex." For a discussion of the "lying with colors" visuals that neuroscientists repeatedly use to create misleading impressions of much greater activity in brain regions without significantly greater activity during particular activities, see my post "Neuroscientists Keep Using Misleading Coloring in Brain Visuals." 

Neural Shortfall #7:  Very High Brain Signal Noise, and the Lack of Any Capability in the Brain for Reliably Transmitting Signals Throughout Brain Tissue

A 2020 paper states this:

"Neurons communicate primarily through chemical synapses, and that communication is critical for proper brain function. However, chemical synaptic transmission appears unreliable: for most synapses, when an action potential arrives at an axon terminal, about half the time, no neurotransmitter is released and so no communication happens... Furthermore, when neurotransmitter is released at an individual synaptic release site, the size of the local postsynaptic membrane conductance change is also variable. Given the importance of synapses, the energetic cost of generating action potentials, and the evolutionary timescales over which the brain has been optimized, the high level of synaptic noise seems surprising."

Such a result (a very serious brain physical shortfall) is surprising only to those who believe that your brain stores your memories and that your brain makes your mind.  Those who disbelieve such a thing may expect exactly such shortfalls to be repeatedly found. 

The quote above states one of the most important shortfalls of the human brain: that chemical synapses (by far the most common type) do not reliably transmit signals, and transmit signals with an accuracy of 50% or less.  This is a fact with the most gigantic consequences. Given that brain signals have to travel across many different synapses, it should be impossible for humans to recite any long bodies of textual information with even 50% accuracy. But instead of having such dismal recall abilities, humans again and again show the ability to perfectly recite very large bodies of memorized information. This occurs every time an actor recalls all of the 1422 lines in the role of Hamlet, and every time a Wagnerian opera singer correctly sings all of the words and notes of the very long roles of Siegfried or Hans Sachs. It occurs even more dramatically when certain Muslim memory marvels correctly recall all of the 6000+ lines in their holy book.  For many similar cases, see my post "Exceptional Memories Strengthen the Case Against Neural Memory Storage." 
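
A toy calculation shows how crippling a 50% per-synapse success rate would be. It assumes, purely for illustration, that recalling an item depends on one signal crossing some number of independent, unreliable synapses.

```python
# A toy calculation: the chance that a single signal survives a path of
# independent synapses that each transmit only half the time (the
# transmission figure quoted above). The path lengths are hypothetical.

release_probability = 0.5
for synapses_crossed in (1, 5, 10, 20):
    chance = release_probability ** synapses_crossed
    print(synapses_crossed, f"{chance:.6f}")
# 10 crossings leave about 1 chance in 1000; 20 leave about 1 in a million.
```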

Below is an example of the type of result in a high-noise system that does not reliably transmit information.

[Diagram: signal transmission in a high-noise system]

Neural Shortfall #8:  The Lack of Any Token Repetition in the Brain Other Than the Nucleotide Base Pair Tokens Capable of Representing Only Low-Level Chemical Information Such as Sequences of Amino Acids

Even when there is an encoding system that has not yet been deciphered or figured out, there will be indications that such an encoding system exists. The indications are repeated tokens. Before Europeans were able to figure out how hieroglyphics worked, they knew that hieroglyphics used some type of encoding system, because they could see how frequently there was a repetition of individual tokens or symbols. Not counting the genetic information stored in neurons and many other types of cells, where we do see an enormous amount of token repetition, there is no sign of any token repetition in the brain. This is a clear sign that the brain does not use any system of encoding to store human memories.  And if there is no such system of encoding, we should not think that brains store memories; for some system of encoding would be required to convert human experiences and human learned knowledge into brain states. 

Neural Shortfall #9: The Lack of Any Discovered Stored Memories in Extracted Brain Tissue or Dissected Corpses

Within cells below the neck, there is stored information in DNA, using a system of representations called the genetic code. Scientists were able to discover such a system in the middle of the 20th century, and were able to start recovering information stored using such a system, which was all low-level chemical information such as which amino acids make up particular proteins. If the human brain stored memories, there would be brain areas storing memories that could be studied microscopically to find what a person had learned. You would then be able to find out some of the things someone had learned or experienced by studying his brain. No such memories have ever been discovered by microscopically examining brain tissue, even though scientists today have vastly better microscopes than they had in the 1950's. 

Neural Shortfall #10: The Lack of Any Mechanism Allowing a Very Rapid Transformation of Brain Tissue or Synapses Sufficient to Account for Instant Memory Formation

It is a fact of human experience that humans can form permanent new memories instantly. A successful theory of a brain storage of memories would somehow have to account for this fact, by explaining how permanent traces of an experience could instantly appear. There is no theory of a brain storage of memory that does such a thing.  Theories of a brain storage of memory (which are little more than hand-waving and catchphrases) typically appeal to protein synthesis as a key part of memory formation. But protein synthesis requires quite a few minutes.  The shortfall of neuroscientists in this regard is so great that they typically end up making ridiculous claims (contrary to all human experience) that it takes minutes for a person to form a new memory. 

Neural Shortfall #11: The Lack of Any Indexing System or Coordinate System or Addressing System That Could Help to Explain Instant Memory Recall

It is a fact of human experience that humans recall correct answers instantly. If you ask me what was the biggest conquest of Julius Caesar, I don't have to take minutes or hours waiting for my brain to search for the answer. Instead, the instant you finish asking the question, I will give the correct answer: the Gauls in France. How can humans answer questions so quickly? We know that indexing can make possible very fast retrieval of information such as occurs in database systems.  But the brain has no sign of having any indexing system. An indexing system can only occur if there exists an addressing system or a position notation system (for example, a book has the position notation system called page numbering, and the index at the back of the book leverages that position notation system).  Brains have neither an indexing system nor an addressing system nor the position notation system that is a prerequisite for an indexing system. There are no neuron numbers and no neural coordinate system that a brain might use to allow fast, indexed retrieval of a memory.  So there's no way to explain how instant recall could occur if you are retrieving memories from your brain. 
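
To see why an addressing system matters, compare retrieval with and without an index in a minimal Python sketch. The stored "facts" are toy data, and a Python dict plays the role of a book-style index.

```python
# A minimal sketch of indexed vs. unindexed retrieval. The facts are toy data.

facts = [
    ("capital of France", "Paris"),
    ("biggest conquest of Julius Caesar", "Gaul"),
    ("boiling point of water", "100 degrees Celsius"),
]

def unindexed_lookup(question):
    for stored_question, answer in facts:  # no addresses: scan every record
        if stored_question == question:
            return answer

indexed_facts = dict(facts)  # keys act like addresses for one-step retrieval

print(unindexed_lookup("biggest conquest of Julius Caesar"))  # Gaul (a scan)
print(indexed_facts["biggest conquest of Julius Caesar"])     # Gaul (one step)
```

The dict answers in one step because its keys function as addresses; the unindexed version must scan record after record, and its lookup time grows with the number of records stored.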

We don't think and recall at the sluggish speed of brains; we think and recall at the speed of souls.