Wednesday, April 29, 2026

Getting Billions, They Boasted They Would Get by 2025 a "Comprehensive Mechanistic Understanding of Mental Function"

In recent years the two largest brain research projects have been two projects launched in 2013: the big US project called the BRAIN Initiative, and the big European Union project called the Human Brain Project. The BRAIN Initiative has received billions in funding, but has failed to fulfill the boasts it made about what it would achieve by the year 2025.

More than seven years ago, the leaders of the BRAIN Initiative produced a document filled with hubris, one boasting about the grand and glorious things the project would achieve by the year 2025. The document was called Brain 2025: A Scientific Vision, and was offered at one of the project's two main web sites. You can read the document at the link here, and after going to that page you need to press on a + button (next to "Expand accordion content") to get the whole text. 

The “scientific vision” laid out in the document is largely an ideological vision, based on the unbelievable idea that the human mind is merely the product of the brain. The dubious ideology of the authors is made clear in the very first sentence of the document, in which the authors state, “The human brain is the source of our thoughts, emotions, perceptions, actions, and memories; it confers on us the abilities that make us human, while simultaneously making each of us unique.” It has certainly not been proven that any brain has ever generated a thought or stored a memory.

In fact, later in the document the authors confess, “We do not yet have a systematic theory of how information is encoded in the chemical and electrical activity of neurons, how it is fused to determine behavior on short time scales, and how it is used to adapt, refine, and learn behaviors on longer time scales.” This is certainly true. No one has anything like a systematic theory of how a brain could store memories as neural states, nor has anyone come up with anything like a systematic theory of how a brain could generate a thought. So why, then, did the document start out by stating that “the human brain is the source of our thoughts, emotions, perceptions, actions, and memories”? No one has any business making such a claim unless he first has “a systematic theory of how information is encoded in the chemical and electrical activity of neurons.” But the document admits that no such theory exists.

But despite this one candid confession, the document was a writing of enormous hubris. The authors boasted that by the year 2025, the BRAIN Initiative would figure out how minds work. The document stated, "The most important outcome of the BRAIN Initiative will be a comprehensive, mechanistic understanding of mental function that emerges from synergistic application of the new technologies and conceptual structures developed under the BRAIN Initiative.” Notice the enormous predictive conceit of that statement, which sounds like a delusion of grandeur. The authors did not merely claim that they would "shed light" on how minds work, or that they would "get clues" as to how minds work. They boasted that their project would produce a "comprehensive, mechanistic understanding of mental function." Making a boast as big as the sky, the authors predicted that their project would tell us how brains produce minds and their phenomena.

What has been the result of the BRAIN Initiative? No great breakthroughs have occurred. The results (to use English slang) are "peanuts" or "chickenfeed." 


On the page here, we get a summary of the BRAIN Initiative's achievements in 2024. None of it sounds like an achievement relevant to whether brains make minds, except for the claim that there was developed a "brain-computer interface that can convert brain waves into speech with minimal training." We have a link to the page here, which makes the same claim. The claim is unfounded. The pages are referring to the study "Representation of internal speech by single neurons in human supramarginal gyrus." My post here explains why the study is not actually a demonstration of a "brain-computer interface that can convert brain waves into speech."

What's going on in the study is a reading of brain waves during very rapid switching between "speak it" instructions and "think it" instructions, with no care being taken to prevent subjects from speaking during the very short "think it" periods lasting only a few seconds. We should assume that during many of the claimed "internal speech" intervals there were actually "audible speech" events, because of a failure of subjects to follow a very hard-to-perform protocol, one seemingly designed to produce such "failure to follow instructions" events. Under such an assumption, the results can easily be explained without assuming that brain waves were being "converted into speech." The second of the BRAIN Initiative pages given above boasts that "For one of the two participants, the BCI [brain-computer interface] could decode several words of their inner dialogue with 79% accuracy during an online task." These are meager tiny-sample-size results easily explained by chance, or by differences in muscle movements producing different types of brain waves, with supposed "inner dialogue" events often being verbal speaking events as users failed to perfectly follow the hard-to-follow instructions involving rapid switching between speech and pure thinking.

On the same BRAIN Initiative page we have another similar boast of a "brain-computer interface." It is a reference to a paper which makes no claim of picking up an "inner dialogue" involving no muscle movements. Instead, a patient with a speech problem had electrodes implanted in his brain, and some system is picking up his attempts to speak different words. Such an attempt can produce limited success mainly because different types of speech efforts (involving slightly different muscle movements) may produce different types of electrical readings. Muscle movements or attempted muscle movements show up very noticeably in brain wave readings; and distinctive types of muscle movements corresponding to particular speech sounds (phonemes) may produce distinctive blips in such readings.

Studies like this (hailed as examples of "mind reading" from brains) typically involve a variety of shady tricks, such as drawing on inputs beyond mere brain waves, for example eye-tracking devices, which make it easy to determine what word or picture on a screen someone is focusing on.

The reality is that the BRAIN Initiative has failed to produce any results backing up in any weighty way claims that brains make minds and that brains store memories. You cannot actually detect what someone is thinking by analyzing mere brain waves. Studies claiming to do such a thing typically involve various types of dubious methodology and objectionable techniques. A well-designed, fairly conducted and well-analyzed study will always show a failure to detect from brain waves alone what someone is thinking.

These were additional unfulfilled boasts of the document entitled Brain 2025: A Scientific Vision:
  • "We expect to discover new forms of neural coding as exciting as the discovery of place cells, and new forms of neural dynamics that underlie neural computations." So-called "place cells" were never actually discovered. The claim that they were discovered is one of the many groundless triumphal legends of neuroscientists, who have a high tendency to repeat "old wives' tales" of the belief community they belong to. Read my post here for a debunking of claims that "place cells" were ever observed. All that happened was that scientists observed some cells, and claimed that those cells were more active when some rats were in certain places. The studies were never examples of robust science, because they were guilty of various methodological sins such as using way-too-small study group sizes. No actual new form of neural coding was ever discovered by the BRAIN Initiative or any other scientific project or study. And no one ever discovered "neural dynamics that underlie neural computations."
  • "Through deepened knowledge of how our brains actually work, we will understand ourselves differently, treat disease more incisively, educate our children more effectively, practice law and governance with greater insight, and develop more understanding of others whose brains have been molded in different circumstances." No such bonanza of benefits resulted from the BRAIN Initiative. Neuroscience has done nothing to improve the education of children, and done nothing to improve the practice of law or government.
  • "We must understand how circuits give rise to dynamic and integrated patterns of brain activity, and how these activity patterns contribute to normal and abnormal brain functions. Our expectation is that this approach will answer the deepest, most difficult questions about how the brain works, providing the basis for optimizing the health of the normal brain and inspiring novel therapies for brain disorders." The BRAIN Initiative wasted billions floundering around in this dead end, but did not answer any of the "most difficult questions about how the brain works," or any of the "most difficult questions about how the mind works."  Scientists still have no credible tale to tell of how a brain could think, imagine, instantly learn or instantly recall. 
  • "We expect The BRAIN Initiative® to develop new biological reagents, possibly including genetically-modified strains of rodents, fish, invertebrates, and non-human primates; recombinant viruses targeted to different brain cell types in different species; genetically-encoded indicators of different forms of neural activity; and genetic tools for modulating neuronal activity." Here the scientists (sounding like eugenics enthusiasts) fall into Frankenstein folly by boasting about how they will monkey with the genes of various organisms, including rodents and primates. The hubris involved here should provoke the gravest concerns. And anyone familiar with the very substantive suspicions that the COVID-19 virus might have arisen from a lab leak should shudder at the proposed gene fiddling.

Saturday, April 25, 2026

She Had Above-Average Intelligence With Only About 15% of Her Brain

The failure of neuroscientists to adequately study minds is a very severe failure. You can get a PhD in neuroscience while making only a perfunctory study of human minds. An examination of the courses required to get a Master's Degree in neuroscience will typically show that only one or two courses in psychology are required. Doing a neuroscience PhD dissertation typically involves highly specialized research on some very narrow topic, research that does not require much additional study of human minds and the mental capabilities and mental experiences of humans. The topic of human minds and human mental experiences is a topic of oceanic depth, requiring years of deep study for someone to get a good grasp of the full range of human mental states, mental capabilities and mental experiences. Very strangely, a typical neuroscientist will feel qualified to pontificate about what causes mental experiences, mental states and mental capabilities, even though he has typically done little deep study of them.

Ask a neuroscientist to describe the best examples of high capacity and high accuracy in human memory recall, and you will be likely to get a shrug of the shoulders, or an answer that is wrong.  Ask a neuroscientist to describe the best examples of human performance in tests of extrasensory perception (ESP), and you will be likely to get a shrug of the shoulders, or an answer that is wrong. Ask a neuroscientist to describe the best examples of humans learning or memorizing things very quickly, and you will be likely to get an answer showing no study of such a topic. Ask a neuroscientist to describe the fastest examples of human calculation involving no use of any objects such as pencil, paper or blackboards, and you will likely get an answer that fails to describe the most impressive cases. 

Rather amazingly, it also seems true that most or very many neuroscientists are not very deep and thorough scholars of the topic of human brains. A typical neuroscientist may be able to tell you in very great detail about some particular aspects of human brains, and may be able to tell you in the greatest detail about how to use some machine that is used to study brains. But the same neuroscientist may have failed to properly study the topic of human brains in a way that involves learning every relevant thing he could about human brains. Ask that neuroscientist to tell you what happens when you remove half of a human brain, and you may get an answer that is wrong. Ask that neuroscientist to tell you how reliably chemical synapses transmit nerve signals (action potentials), and you may get an answer that is wrong. Ask that neuroscientist to tell you how quickly a brain electrically shuts down when the heart stops (reaching a state called asystole), and you may get an answer that is wrong. Ask that neuroscientist how quickly the average brain signal travels, and you will typically get an answer that is wrong, an answer failing to take into account all of the relevant factors, such as the very strong slowing factor of cumulative synaptic delays, and the very strong slowing factor of the relatively slow transmission speed of dendrites.
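The slowing effect of synaptic delays and slow dendrites can be sketched with a back-of-the-envelope calculation. All of the speeds, lengths, and delays below are assumed round numbers chosen purely for illustration, not measured values:

```python
# Illustration with assumed round numbers: the effective speed of a signal
# crossing a chain of neurons is much lower than the raw axon conduction
# speed, once per-synapse delays and slow dendritic propagation are counted.

AXON_SPEED = 10.0       # m/s  (assumed axon conduction speed)
DENDRITE_SPEED = 0.5    # m/s  (assumed slow dendritic propagation)
SYNAPTIC_DELAY = 0.001  # seconds per chemical synapse (assumed ~1 ms)

def effective_speed(n_neurons, axon_len=0.01, dendrite_len=0.001):
    # A path through n_neurons traverses n axons, n dendrites,
    # and n-1 chemical synapses.
    distance = n_neurons * (axon_len + dendrite_len)
    travel_time = (n_neurons * axon_len / AXON_SPEED
                   + n_neurons * dendrite_len / DENDRITE_SPEED
                   + (n_neurons - 1) * SYNAPTIC_DELAY)
    return distance / travel_time  # meters per second

print(round(effective_speed(1), 2))    # single neuron: already below axon speed
print(round(effective_speed(100), 2))  # long chain: far below axon speed
```

Under these assumed numbers, a signal crossing a long chain of neurons moves at only a small fraction of the quoted axon speed, which is the point about cumulative delays.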

Part of the job of properly studying brains is to study very thoroughly all of the most impressive cases of high mental performance despite very high brain damage. Relatively few neuroscientists show signs of having studied such a topic. In order to properly study such a topic, you must study unusual medical case histories.  Very many of the most important and relevant medical case histories are recorded in books, newspapers and magazines. But can you ever recall reading of a neuroscientist searching newspapers for unusual case histories in neuroscience? 

Luckily there are some web sites that contain very many of the medical case histories most relevant to the question of whether the human brain is the source of the mind and whether the human brain is the storage place of human memories. One of those sites is the very site you are reading. In my series of posts labeled "High Mental Function Despite Large Brain Damage," which you can read here, I describe many of the most important case histories relevant to the question of whether the human brain is the source of the mind. Now let me provide some more such cases.

The first case involves hydrocephalus, a disease in which a brain has excessive watery fluid. In cases of hydrocephalus a brain may end up in a state that is mostly watery fluid. The brain scan of someone with severe hydrocephalus might look something like the schematic visual below. The black part in the middle is watery fluid that has basically no neurons.


The case of Sharon Parker is described in a 2003 news story entitled "Success of Nurse Who Lost Most of Her Brain." You can read the story here. We read this:

"When she was a baby, Sharon Parker's parents were told a rare and incurable condition meant she would not reach her fifth birthday.

She was left with only 15 per cent of her brain and there was little hope she could lead a normal life. But she defied the experts to become an astonishing success story.

Now 39, Mrs Parker is a nurse with a high IQ who is happily married with three children....She was diagnosed with congenital hydrocephalus - water on the brain - when she was nine months old. Doctors drained the liquid from her skull with a tube but her brain mass had been compacted in the outer edges of her skull, leaving a gaping hole in the middle....As a 16-year-old, she passed eight O-Levels and her IQ was later found to be 113, putting her in the top 20 per cent of the population.

The hydrocephalus has left her with a below-average short-term memory so she carries a notebook to remind herself to do things. However, tests have found that her long-term memory is better than average.

After leaving school, Mrs Parker decided to become a nurse and soon after starting her training, she met her future husband David, a builder who is now 45. The couple were married three years later and have three children...

She often participates in studies, including one recently in Ohio when she was examined by one of the world's leading experts on brain mass. Graham Teesdale, Professor of Neurosurgery at the University of Glasgow, said she demonstrated how adaptable the brain can be even when it is incomplete. 'She shows how the brain has an immense capacity to cope and adapt,' he said. 'Some people with the same acute problem experience problems in thought processes but others are able to function totally normally.' "

A materialist who believes that the brain is the source of the mind may wince after reading about this case history. But there is another hydrocephalus case that may be even better as evidence that brains do not make minds. Coincidentally, this case also involves someone named Parker, but someone other than Sharon Parker: a male with almost no brain. 

We read about the case on the page here:

"Parker was born on September 9, 2008, with hydrocephalus, or excess fluid on the brain. Parker’s parents received the diagnosis at 20 weeks in the pregnancy that there was a blockage between the third and fourth ventricles of Parker’s brain, which was preventing the cerebrospinal fluid from draining into the body. As a result, the fluid would build up and compress Parker’s brain matter against his skull, making it almost non-existent, threatening to severely hinder Parker’s early neurological development. At birth, the average baby has 90–95% brain matter and 5–10% fluid within the cranial cavity; Parker had over 98% fluid and less than 2% brain matter, amounting to a mere 8 millimeters of brain matter at birth."

Later on the same page we read this:

"He attends a special-needs kindergarten class, where he continues to thrive and demonstrate an inexplicable intellect and remarkable social skills.

Parker is truly a miracle – the child who once was thought may never walk or talk now plays, dances, sings, never stops talking (having never met a stranger) and hopes to one day become a sportscaster. Parker has far exceeded every expectation of his doctors and also adds being named the 2015 Ace All-Star to his long list of remarkable achievements."

Tuesday, April 21, 2026

The "Research Dystopia" of Dogma-Driven Neuroscience Experimentation

A dystopia is a fictional world in which things have gone horribly wrong. You might use the term "research dystopia" to describe certain fields of scientific research in which researchers are dedicated to proving untrue or implausible dogmas through poor methods of experimentation or analysis. Such a research dystopia is largely a world of fiction, in which false or implausible claims keep being repeated. In such a research dystopia, things have gone horribly wrong, because there is a predominance of poor techniques of scientific experimentation and scientific analysis.

Sadly, the field of research known as cognitive neuroscience is a field you could call a research dystopia, without being too far off the mark. Such a field is largely a world of fiction, in which researchers keep making untrue claims about brains being the source of minds and brains being the storage place of memories. And it is a world in which things have gone horribly wrong, because researchers keep churning out miserably designed studies guilty of various types of Questionable Research Practices.

The latest evidence that cognitive neuroscience research is a research dystopia can be found in a press release on the clickbait-heavy site earth.com, and in the scientific paper that press release is promoting. The press release has the very untrue title "Scientists can now 'edit' brain circuits to enhance memory."  We read this very false claim: "New research shows that trimming specific synapses in a mouse brain circuit can strengthen memories and help them last longer." We read about some weird experiment in which scientists fiddled with synapses in the brains of a few mice. 

Making the claim that a standard measure of memory was used (a claim that is untrue for reasons I will soon explain), the press release says this:

"Mice with edited hippocampal circuits froze more during recall tests, a standard memory measure.  With mild training, that advantage appeared two days after learning and remained 23 days later, strengthening both recent and long-term memory. With more intense training, the treated group held steady while controls faded, so the difference was not just a lucky one-off."

Helping to create an illusion that some reliable research was done, the press release makes no mention of the number of mice used in the experiment. A look at the scientific paper being discussed by the press release gives us the answer to that question. The scientific paper is the very low-quality paper here, one entitled "Remodeling synaptic connections via engineered neuron-astrocyte interactions." In the scientific paper we read that the number of mice being tested was ridiculously low. The study group sizes were way too small: only 3 mice or only 6 mice per group. No study of this type should be taken seriously unless the study group sizes were at least 15 or 20 animals per study group. You do not have any decent evidence of a real effect if you merely use study group sizes of 6 animals per study group in a study comparing performance between altered mice and unaltered mice. It is way, way too easy to get a false alarm using a study group size so small.

Below is a graph from the paper, found in Figure 8 of the paper:


This is what the paper is offering as its main evidence for a change in memory performance produced by the brain fiddling that the experimenters did. Each of the dots represents the claimed "freezing behavior" of one mouse in only one trial. By counting the dots, we can see that the study group sizes were only 6.

The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" has a Table 2 giving recommendations for minimum study group sizes for different types of research. The minimum number of subjects for an experimental study is 21 subjects per study group. 

minimum sample sizes

We simply cannot take seriously any study of this type using such a way-too-small study group of only six mice per study group. Using a study group size that small, it is way, way too easy to get a false alarm result, purely by chance. Similarly, if I do a test of the effectiveness of rubbing a lucky rabbit's foot charm in two groups of six people, and one of the groups reports having better luck on the few days of the test, that is no decent evidence that rubbing a rabbit's foot charm increases luck. It is way, way too easy to get such a result from pure chance.
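The point about chance can be checked with a short simulation: draw two groups from the very same distribution, so that no real effect exists at all, and see how often the group means still differ substantially. The group sizes, the half-standard-deviation cutoff, and the trial count below are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(1)

def mean_diff(n):
    # Two groups drawn from the SAME distribution: there is no real effect.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(a) - statistics.mean(b)

def frac_large_diff(n, trials=20000, cutoff=0.5):
    # Fraction of no-effect experiments whose group means still differ
    # by at least half a standard deviation, purely by chance.
    hits = sum(abs(mean_diff(n)) >= cutoff for _ in range(trials))
    return hits / trials

for n in (6, 21):
    print(n, round(frac_large_diff(n), 2))
```

With 6 subjects per group, roughly four in ten no-effect experiments produce a group difference of half a standard deviation or more; with 21 per group the fraction drops sharply, which is why tiny study groups so easily yield false alarms.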

Another reason why the reported result is worthless as evidence is that it used the utterly unreliable technique of trying to judge memory performance by judging the "freezing behavior" of mice. That technique is not a reliable technique for judging fear or recall in rodents, for reasons explained at length in my post here. 

The press release promoting this very low-quality paper makes the claim that "a standard memory measure" was used. That is not correct. Although very often used in the dysfunctional world of rodent neuroscience research, the technique of attempting to measure "freezing behavior" in rodents involves no standard measurement technique. The long appendix at the end of this post documents the utter lack of standards in such "freezing behavior" estimations. And when "freezing behavior" estimations occur, it is not even memory that is being measured. What is being measured is what percent of some time interval a rodent is not moving.

Neuroscientists love the technique of "freezing behavior" estimations, because it is a "see whatever you are hoping to see" type of technique, in which the desired positive result can almost always be claimed by fiddling around with how the "freezing behavior" estimation occurs. The lack of any real standard in such estimations is only part of the reason why "freezing behavior" estimations are an utterly unreliable technique for measuring fear or recall in rodents.
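How sensitive a freezing score is to the analyst's chosen immobility threshold can be shown with a toy calculation. The motion trace and both thresholds below are invented for illustration and are not taken from any study:

```python
# Hypothetical motion trace (arbitrary units, one value per second).
motion = [0.0, 0.1, 0.3, 0.05, 0.0, 0.2, 0.4, 0.02, 0.0, 0.15]

def freezing_percent(trace, threshold):
    # A sample counts as "freezing" whenever motion falls below the
    # analyst's chosen threshold; the score is the percent of such samples.
    frozen = sum(1 for m in trace if m < threshold)
    return 100.0 * frozen / len(trace)

print(freezing_percent(motion, 0.05))  # strict threshold  -> 40.0
print(freezing_percent(motion, 0.25))  # lenient threshold -> 80.0
```

The same animal, the same trace, scored as 40% freezing or 80% freezing depending on a single tunable number: that is the "see whatever you are hoping to see" problem in miniature.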



We have in the very poor-quality paper "Remodeling synaptic connections via engineered neuron-astrocyte interactions" no decent evidence that manipulating synaptic connections has any effect on memory. The experimenters have used a study group size so low that it would not be good evidence of a memory change even if a reliable technique had been used to measure recall. And no such reliable technique for measuring recall has been used, but only the unreliable technique of attempting to judge "freezing behavior." The authors might have discovered how way too small their study group sizes were if they had done a sample size calculation. But they make no mention of doing such a calculation.

Below are some quotes mentioning the use of too-small study group sizes and too-low statistical power in neuroscience studies. All references to underpowered studies are references to studies using too-small study group sizes. 

  • "Postmortem studies need n = 26 subjects to detect the same effect 80 % of the time, while MRI studies need n = 84 subjects; thus, most individual MRI studies and both postmortem studies were underpowered." (Link)
  • "The median neuroimaging study sample size is about 25...Reproducible brain-wide association studies require thousands of individuals." (Link)
  • "Critical appraisal indicated that studies were underpowered, did not match cases with controls and failed to account for confounding factors." (Link)
  • "Power calculations suggested that studies were underpowered." (Link)
  • "The small sample sizes of the current literature make it very likely that studies were underpowered, resulting in a host of issues such as imprecise association estimates, imprecise estimated effect sizes, low reproducibility, and reduced chances of detecting a true effect or, conversely, that 'detected' effects are indeed true." (Link)
  • "Most validation studies were underpowered and hence may have given a misleading impression of accuracy."  (Link)
  • "We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience." (Link)
The study here concludes, "Our results indicate that the median statistical power in neuroscience is 21%." This is an abysmal figure. It has long been said that in experimental research the goal should be a statistical power of 80%, meaning an 80% chance of detecting a true effect of the assumed size. A study with a statistical power of 21% is a low-quality study that is likely to be announcing a false alarm. When a research field has a median statistical power of 21%, that means half of the studies have a statistical power of 21% or less. If such an estimation is correct, it means the great majority of neuroscience studies report results that are unreliable or untrue.
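A rough power calculation illustrates the arithmetic behind these numbers. This is a minimal sketch using the normal approximation to a two-sample test; the assumed effect size d = 0.5 is the conventional "medium" effect, not a figure from any particular study:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def power_two_sample(n, d, z_crit=1.959964):
    # Approximate power of a two-sided two-sample test at alpha = 0.05,
    # with n subjects per group and true effect size d (Cohen's d).
    shift = d * math.sqrt(n / 2.0)
    return (1 - norm_cdf(z_crit - shift)) + norm_cdf(-z_crit - shift)

def n_for_power(d, target=0.80):
    # Smallest per-group n reaching the target power.
    n = 2
    while power_two_sample(n, d) < target:
        n += 1
    return n

print(round(power_two_sample(6, 0.5), 2))  # power with 6 animals per group
print(n_for_power(0.5))                    # per-group n needed for 80% power
```

Under these assumptions, six animals per group yields a power of only about 14%, in the same dismal neighborhood as the 21% median figure, while reaching the conventional 80% target would require dozens of animals per group.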

The combination of very bad research practices and the enormous bias of researchers eagerly trying to prove old, untenable dogmas about brains makes the field of neuroscience experimentation something you might call a research dystopia, a kind of experimental wasteland.

Appendix: The Lack of Any Standards in "Freezing Behavior" Estimations

 A paper describing variations in how "freezing behavior" is judged reveals that no standard is being followed. The paper is entitled "Systematic Review and Methodological Considerations for the Use of Single Prolonged Stress and Fear Extinction Retention in Rodents." The paper has the section below telling us that statistical techniques to judge "freezing behavior" in rodents are "all over the map," with no standard statistical method being used:

"For example, studies using cued fear extinction retention testing with 10 cue presentations reported a variety of statistical methods to evaluate freezing during extinction retention. Within the studies evaluated, approaches have included the evaluation of freezing in individual trials, blocks of 2–4 trials, and subsets of trials separated across early and late phases of extinction retention. For example, a repeated measures analysis of variance (RMANOVA) of baseline and all 10 individual trials was used in Chen et al. (2018), while a RMANOVA was applied on 10 individual trials, without including baseline freezing, in Harada et al. (2008). Patterns of trial blocking have also been used for cued extinction retention testing across 10 trials, including blocks of 2 and 4 trials (Keller et al., 2015a). Comparisons within and across an early and late phase of testing have also been used, reflecting the secondary extinction process that occurs during extinction retention as animals are repeatedly re-exposed to the conditioned cue across the extinction retention trials. For example, an RMANOVA on trials separated into an early phase (first 5 trials) and late phase (last 5 trials) was used in Chen et al. (2018) and Chaby et al. (2019). Similarly, trials were averaged within an early and late phase and measured with separate ANOVAs (George et al., 2015). Knox et al. (2012a,b) also averaged trials within an early and late phase and compared across phases using a two factors design.

Baseline freezing, prior to the first extinction retention cue presentation, has been analyzed separately and can be increased by SPS (George et al., 2015) or not affected (Knox et al., 2012b; Keller et al., 2015a). To account for potential individual differences in baseline freezing, researchers have calculated extinction indexes by subtracting baseline freezing from the average percent freezing across 10 cued extinction retention trials (Knox et al., 2012b). In humans, extinction retention indexes have been used to account for individual differences in the strength of the fear association acquired during cued fear conditioning (Milad et al., 2007, 2009; Rabinak et al., 2014; McLaughlin et al., 2015) and the strength of cued extinction learning (Rabinak et al., 2014).

In contrast with the cued fear conditioning studies evaluated, some studies using contextual fear conditioning used repeated days of extinction training to assess retention across multiple exposures. In these studies, freezing was averaged within each day and analyzed with a RMANOVA or two-way ANOVA across days (Yamamoto et al., 2008; Matsumoto et al., 2013; Kataoka et al., 2018). Representative values for a trial day are generated using variable methodologies: the percentage of time generated using sampling over time with categorical handscoring of freezing (Kohda et al., 2007), percentage of time yielded by a continuous automated software (Harada et al., 2008), or total seconds spent freezing (Imanaka et al., 2006; Iwamoto et al., 2007). Variability in data processing, trial blocking, and statistical analysis complicate meta-analysis efforts, such that it is challenging to effectively compare results of studies and generate effect size estimates despite similar methodologies."

As for the techniques used to judge so-called "freezing behavior" in rodents, they are "all over the map," with wide variation from researcher to researcher. The paper tells us this:

"Another source of variability is the method for the detection of behavior during the trials (detailed in Table 1). Freezing behavior is quantified as a proxy for fear using manual scoring (36% of studies; 12/33), automated software (48% of studies; 16/33), or not specified in 5 studies (15%). Operational definitions of freezing were variable and provided in only 67% of studies (22/33), but were often explained as complete immobility except for movement necessary for respiration. Variability in freezing measurements, from the same experimental conditions, can derive from differential detection methods. For example, continuous vs. time sampling measurements, variation between scoring software, the operational definition of freezing, and the use of exclusion criteria (considerations detailed in section Recommendations for Freezing Detection and Data Analysis). Overall, 33% of studies did not state whether the freezing analysis was continuous or used a time sampling approach (11/33). Of those that did specify, 55% used continuous analysis and 45% used time sampling (12/33 and 10/33, respectively). Several software packages were used across the 33 studies evaluated: Anymaze (25%), Freezescan (14%), Dr. Rat Rodent's Behavior System (7%), Packwin 2.0 (4%), Freezeframe (4%), and Video Freeze (4%). Software packages vary in the level of validation for the detection of freezing and the number and role of automated vs. user-determined thresholds to define freezing. These features result in differential relationships between software vs. manually coded freezing behavior (Haines and Chuang, 1993; Marchand et al., 2003; Anagnostaras et al., 2010). Despite the high variability that can derive from software thresholds (Luyten et al., 2014), threshold settings are only occasionally reported (for example in fear conditioning following SPS).
There are other software features that can also affect the concordance between freezing measure detected manually or using software, including whether background subtraction is used (Marchand et al., 2003) and the quality of the video recording (frames per second, lighting, background contrast, camera resolution, etc.; Pham et al., 2009), which were also rarely reported. These variables can be disseminated through published protocols, supplementary methods, or recorded in internal laboratory protocol documents to ensure consistency between experiments within a lab. Variability in software settings can determine whether or not group differences are detected (Luyten et al., 2014), and therefore it is difficult to assess the degree to which freezing quantification methods contribute to variability across SPS studies with the current level of detail in reporting. Meuth et al. (2013) tested the differences in freezing measurements across laboratories by providing laboratories with the same fear extinction videos to be evaluated under local conditions. They found that some discrepancies between laboratories in percent freezing detection reached 40% between observers, and discordance was high for both manual and automated freezing detection methods." 

It's very clear from the quotes above that once a neuroscience researcher has decided to use "freezing behavior" to judge the amount of fear or recall in rodents, he has a "see whatever I want to see" situation. Since no standard protocol governs these estimations of so-called "freezing behavior," a neuroscientist can report more or less whatever result he wants, simply by switching around the way in which "freezing behavior" is estimated until the desired result appears. We should not make the mistake of assuming that those using automated software for judging "freezing behavior" are getting objective results. Most such software has user-controlled options that a user can change to help him see whatever he wants to see.

When "freezing behavior" judgments are made, there are no standards for how long an animal should be observed when recording a "freezing percentage" (the percentage of time the animal was immobile). An experimenter can choose any length of time from 30 seconds to five minutes or more (even though it is senseless to assume rodents might "freeze in fear" for as long as a minute). Neuroscience experiments typically fail to pre-register experimental methods, leaving experimenters free to make analysis choices "on the fly." So you can imagine how things work. An experimenter might record how much movement occurred during the five or ten minutes after a rodent was exposed to a fear stimulus. He can then check whether a desired above-average (or below-average) amount of immobility occurred during the first 30 seconds; if so, 30 seconds becomes the interval used for the "freezing percentage" graph. If not, he can try 60 seconds, then two minutes, and so on, all the way up to five or ten minutes. If the researcher still has no "more freezing" effect to report, he can always report on only the last minute of a larger time length, or the last two minutes, or the last three minutes, or the last four minutes.

Researchers can also arbitrarily choose what length of immobility will be counted as "freezing" and added to the "freezing percentage" figure. That length can be 1 second, 2 seconds, or any number of seconds between 1 and 10.

Because there are 20 or 30 or 50 different ways in which the data can be analyzed, each with roughly a 50% chance of yielding the desired result, a researcher is almost certain to be able to report some "higher freezing level," even if the tested interventions or manipulations had no real effect on memory. Such shenanigans drastically depart from good, honest, reliable experimental methods.
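The logic here can be illustrated with a rough simulation. This is only a sketch with made-up numbers (the 30% immobility rate, the animal counts, and the list of observation windows are all assumptions for illustration, not any actual lab's protocol): two groups of simulated rodents are given identical random "freezing" behavior, and the analyst is allowed to try several observation windows, counting the experiment as a success if any window shows the treated group freezing more.

```python
import random

random.seed(1)

def freezing_trace(n_seconds=300):
    # One animal's per-second immobility record (True = "frozen").
    # Both groups are drawn from the SAME distribution: the simulated
    # intervention has no real effect on freezing.
    return [random.random() < 0.3 for _ in range(n_seconds)]

def percent_freezing(trace, start, end):
    window = trace[start:end]
    return 100.0 * sum(window) / len(window)

# Hypothetical analysis windows (in seconds) an experimenter might try:
# the first 30s, 60s, or 2 minutes, the whole 5 minutes, or just the
# later portions of the observation period.
WINDOWS = [(0, 30), (0, 60), (0, 120), (0, 300), (150, 300), (240, 300)]

def experiment(n_animals=8):
    controls = [freezing_trace() for _ in range(n_animals)]
    treated = [freezing_trace() for _ in range(n_animals)]
    # "Success" = at least one window in which the treated group's mean
    # freezing percentage exceeds the control group's.
    for start, end in WINDOWS:
        control_mean = sum(percent_freezing(tr, start, end) for tr in controls) / n_animals
        treated_mean = sum(percent_freezing(tr, start, end) for tr in treated) / n_animals
        if treated_mean > control_mean:
            return True
    return False

runs = 2000
hits = sum(experiment() for _ in range(runs))
print(f"A 'more freezing' window was found in {100 * hits / runs:.0f}% of null experiments")
```

Even though no window has more than about a coin-flip chance of showing the desired direction, the freedom to keep trying windows makes a reportable "effect" turn up in the large majority of these null experiments; adding the further freedoms described above (immobility-bout thresholds, late-phase sub-intervals) would push that figure higher still.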

Friday, April 17, 2026

Shoddy Work in the "Memory" Episode of Netflix's "The Mind, Explained"

A Netflix documentary TV series has the boastful title "The Mind, Explained." The episode entitled "Memory" is a cesspool of junk explanations and misleading claims.

For materialists, the high performance of human memory is a scandal that undermines their claims that memory occurs through brain activity. Nothing in a brain bears any resemblance to a device for storing learned information, preserving it for decades, or instantly retrieving it. But humans have the most astonishing memory abilities, such as the ability to learn something instantly, the ability to preserve knowledge and memories for 60 years, and the ability to instantly recall relevant information upon hearing a name or seeing a face. The gap between human memory performance and what a brain should be capable of is as big as the gap between Earth and the planet Mars.

So if you are a materialist, you might want to make that gap seem much smaller, by depicting human memory performance as much worse than it is. That is exactly what the "Memory" episode of "The Mind, Explained" series does right from its beginning, deceptively attempting a "make human memory sound weak" strategy.

At the 1:38 mark we hear neuroscientist Elizabeth Phelps make this false claim about memories of the September 11 attack: "We know that about 50% of the details of that memory change in a year." No, we don't know any such thing. The statement is contradicted on page 34 of a paper Phelps co-authored, which states this:

"Although participants clearly formed flashbulb memories of the reception of information about the 9/11 attack, in that they reported an elaborate recollection even after ten years (Brown & Kulik, 1977), they nevertheless experienced forgetting, in the sense that they no longer remembered the original reception event as they initially reported it. Forgetting in this sense occurred mainly in the first year and then leveled off, so that no change in memory performance was detectable between Years 3 and 10."

Phelps' TV claim is also contradicted by what her paper says on page 38:

"As a result, memory accuracy was high from the beginning and remained high for critical facts. This high level of performance limited the chance for corrections over time, because inaccuracies are rare." 

At the 2:08 mark the narrator asks the misleading question, "Why are memories so unreliable?" At the 2:50 mark the TV show has a demonstration by a memory grandmaster, who can recall an incredibly long series of random numbers. The demonstration contradicts the premise of the question asked at the 2:08 mark.

We then have (around the 5:00 mark) a standard feature of almost all materialist accounts of memory, a repetition of the untrue claim that Henry Molaison (Patient HM) had his brain damaged by surgery, causing him to be unable to form new memories. The claim is untrue, as I document in my post here, which includes examples of Henry recalling things he learned after his brain surgery. 

The show's attempt to show that Henry Molaison could not form new episodic memories is laughable as evidence. At the 5:10 mark we have a quote in which Suzanne Corkin asks Henry "Do you know what you did yesterday?" and he replies, "No, I don't." Corkin then asks, "How about this morning?" and the reply is, "I don't even remember that." Of course, a mere failure to answer such questions a few times is no proof that someone cannot form new memories. There are many simpler explanations of why someone might give such answers, such as laziness or disinterest.

Not actual evidence of an inability to form new memories

Around the 6:15 mark we have the narrator giving the unfounded claim that the medial temporal lobe "helps combine those elements once again" to produce a memory recollection. The claim is unfounded. No trace of anything someone has learned or experienced can be discovered through microscopic examination of a brain. There is zero evidence that any part of the brain creates a memory recollection by some act of combination or gathering up of data read from different places in the brain.

Scientists have no credible explanation to give for how anyone could recall anything using a brain, because nothing in the brain bears any resemblance to a system for storing or retrieving learned information. Neuroscientists are utterly lacking any credible "how" accounts relating to memory recall or memory storage.  When a person lacks a decent "how" account, he may try instead to use a "where" account, and hope you don't notice that he has failed to give a "how" but merely given a "where." And that's just what neuroscientists do. Asked to explain how memory occurs, they engage in hand-waving decorated by jargon, mentioning a few anatomical structures in the brain. This does not amount to a decent explanation for how the many wonders of memory could occur. 

hand-waving neuroscientist

At the 7:15 mark the TV show makes the very dubious claim, "Undergraduates were able to increase their score on the verbal GREs from 460 to 520, just by taking a mindfulness meditation class." No reference is given to a paper making this claim. At the 7:41 mark neuroscientist Donna Rose Addis makes the groundless claim that the amygdala influences the hippocampus "and allows it to form a more detailed and stronger memory." Neuroscientists have no credible explanation for how a brain could form a memory. When neuroscientists make the narrower claim that the hippocampus is forming a memory, without explaining how that could occur, we should not be fooled by such attempts to add a little specificity to simulate knowledge the speaker does not have. Similarly, if someone claims his house is being visited by extraterrestrials, you should probably be skeptical; and you should probably be just as skeptical if that person throws in a bit of specifics by saying he is being visited by extraterrestrials from the Alpha Centauri star system.

At the 8:48 mark neuroscientist Elizabeth Phelps makes this unfounded claim, "If you actually look at the hippocampus, there seems to be cells that are specifically responsive to time and place." This is the socially constructed legend of "place cells" and the socially constructed legend of "time cells," neither of which is well-supported by robust well-designed studies. You can read here about why claims about "place cells" are not well founded. The narrator then spends a minute repeating these unfounded claims, describing a hypothetical effect that has not been well-replicated by well-designed studies. 

This is followed at the 9:28 mark by a repetition of another groundless legend of neuroscience, the claim that London cab drivers had a larger hippocampus because of all the memorization of streets they did. But when we actually look at the scientific paper stating the results, the paper says no such thing. The study (entitled “Navigation-related structural change in the hippocampi of taxi drivers”) says “the analysis revealed no difference in the overall volume of the hippocampi between taxi drivers and controls.” At the 9:54 mark the Netflix TV program shows a graph that does not match any of the graphs in the paper, a graph making it look like the London cab drivers had bigger hippocampi than controls.

We then have a discussion of memorization techniques such as the Memory Palace technique, and the narrator claims that memory aces "change the connections within their brains by training with techniques like the memory palace." There is no evidence that the formation of new memories occurs by a change of connections within brains. We know that humans can form new memories instantly, something that could not occur if such memory formation required the slow process of changing connections within brains. 

Around the 13:00 mark the narrator tells us that a dozen people in the world have memorized more than 20,000 digits of pi. Then the narrator points out that many people have memorized the role of Hamlet, which contains more than 50,000 letters. But failing to stick to this truthfulness about the high accuracy of human memories, the narrator quickly reverts to misstatements trying to make it look like human memory is weak. Outrageously, at the 13:52 mark, the narrator says, "Emotional 9/11 memories are just as inaccurate as everyday memories." The memories of those like myself who witnessed the September 11, 2001 attacks in New York City while in the World Trade Center are not inaccurate, and suggesting otherwise is as deplorable as questioning the memories of Holocaust victims.

At the 15:08 mark we have a screen shot showing the scientific study "Constructing False Memories of Committing a Crime." The study is a morally deplorable one in which subjects were lied to and tricked, to try to confuse them into thinking they had committed some crime they had not committed. We have this confession of very bad misconduct by those conducting the study:

"This study used a modified familial-informant false narrative paradigm to attempt to convince young adult participants that they had committed a crime when they were between the ages of 11 and 14....Participants in the criminal condition were told that they had committed a crime resulting in police contact; one third of them were told that they had committed assault, another third that they had committed assault with a weapon, and the remainder that they had committed theft."

Nothing reported by these researchers should be believed, given this horrible deception. It is appalling for the Netflix TV show to be citing this bottom-of-the-barrel junk study as evidence in support of a false "weak human memory" thesis. 

At the 18:20 mark the narrator discusses experiments in which people were brain scanned when remembering something, with the narrator saying, "When people remembered, a particular network lit up." There is actually no robust evidence that any particular part of the brain is more active when people remember things. To read about the lack of any such evidence, read my post "Brain Imaging Shows No Appreciable Neural Correlates of Memory Activity" using the link here.

A kind of nadir of this very low-quality Netflix program comes at the 16:47 mark where the program attempts to convince us that people cannot remember their youth very well (a claim radically contrary to the experience of almost all of us). We hear of some paper (behind a paywall) called "The Altering of Reported Experiences" that interviewed at age 48 some subjects who had been surveyed when they were only 14. We are told that based on discrepancies in their answers at age 48 and 14, their memories were "uniformly poor" and that "accurate memory was generally no better than expected by chance." But it is an obvious fact of human experience that people can remember their youth very much "better than expected by chance." 

A look at some of the sample questions suggests how the false conclusion was derived. People were not asked questions with specific answers such as "Who was your best friend in junior high?" or "What city did you live in when you graduated high school?" Instead, people were asked questions like these:

What is (was) your mother's best trait?

Competence

Relationship with subject

Intelligence and knowledge

Discipline 

Emotionally responsive 

With questions such as these, it would be very easy for someone to give differing answers as a 14-year-old and as a 48-year-old, even without any change in what was remembered. The difference would mainly arise because such a question can be correctly answered in multiple ways, so that giving different answers would involve no difference in memory.

By using the trick phrase "generally no better than chance," the authors of the paper "The Altering of Reported Experiences" engage in language abuse. For example, imagine there are 100 questions about childhood experiences, and all of the first 45 answers given are the same for the 14-year-olds and the 48-year-olds. Then imagine 20% of the answers are the same (for 14-year-olds and 48-year-olds) for the last 55 answers. You could then use deceptive trick language and claim that the answers are "generally no better than chance," even though the overall matching is 56 out of 100, very much better than the expected chance result of only 20 out of 100. No honest scientist should use a phrase such as "generally no better than chance"; he should instead tell us in the abstract how much better or worse than chance the overall results were. The use of this kind of trick language in the paper's abstract is enough to disqualify the paper as a result to be taken seriously.
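The arithmetic of that hypothetical 100-question example is easy to verify (all the numbers here are the made-up figures from the example above, not data from the paper):

```python
# Hypothetical 100-question survey, as in the example above.
questions = 100
chance_match_rate = 0.20                 # matching expected by pure chance

first_block_matches = 45                 # all of the first 45 answers match
second_block_matches = round(0.20 * 55)  # 20% of the remaining 55 match (11)

total_matches = first_block_matches + second_block_matches
expected_by_chance = chance_match_rate * questions

print(total_matches)        # -> 56
print(expected_by_chance)   # -> 20.0
```

A paper summarizing only the second block could say "no better than chance" while the overall match rate, 56%, is nearly triple the 20% chance expectation.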

It is a fact of human experience that people can very well remember their childhood many decades later. We should have contempt for any paper using trick language trying to persuade us otherwise, and equally great contempt for TV programs quoting such deceptive papers. 

The Netflix show about memory does nothing at all to credibly explain how any of the main capabilities of human memory could possibly occur as a brain process.  What the TV show mainly does is to use misleading tactics to try to convince us that human memory is not very good. A thousand examples could be provided of exceptional memory that show how misleading such an attempt is. Very many examples of the most astonishingly powerful human memories can be found at the bullet list at the beginning of my just-published post here

Monday, April 13, 2026

These Smart Guys Have Silly Thoughts About AI

For eight years Al Gore was vice president of the United States. In the year 2000 he won the popular vote in the US presidential election, and should have been elected president of the United States. But due to the defects of the US election system, which allows someone with fewer votes to become president, a different candidate became US president. After his election defeat, Gore did long years of important praiseworthy work alerting the public to the dangers of global warming. For this he was awarded the Nobel Peace Prize in 2007. 

Now Al Gore is the chairman of some investment group called Generation Investment Management. Gore recently offered his opinion on so-called artificial intelligence. We read of his opinion in the article here. Gore states the nonsensical opinion that AI systems have a sense of self. He states, "I think that my answer is yes, they have developed a sense of self, in my opinion, that is difficult to distinguish from consciousness.”

In the article we read that Gore makes this feeble attempt at justifying his opinion:

"But as he explained later in this half-hour session, he came to this view by a different path. Gore cited Nobel Prize-winning research by the Belgian physical chemist Ilya Prigogine into self-organizing systems as a model for eyeing how AI models can grow in unexpected ways." 

As an attempt to justify the nonsensical claim that AI systems are self-conscious, this is laughable. The named person (Prigogine) did not do work having any real relevance to whether artificial intelligence can be self-conscious. His work made claims about physical "self-organization" in mindless, lifeless chemistry or in biological systems, which has nothing to do with whether machines can be conscious. An examination of Ilya Prigogine's main work Order Out of Chaos: Man's New Dialogue With Nature shows a thinker who has many a deep-sounding thought about science-related topics, but who is not a scholar of minds, brains, or computer technology. The book makes no references to computers, except for a few passing mentions of computer simulations.

Gore is playing the game here of obscure authority name-dropping. It works like this: you mention the writings of some obscure thinker with esoteric writings on deep topics, and cite that as justification for your dumb opinion on some unrelated topic. So, for example, you might say, "I didn't use to think that there were an infinite number of quantum ghost copies of me, but now I believe in such a thing, now that I've read Wolfgang Pauli's work on quantum entanglement." Or you might stupidly say, "After reading Wittgenstein's Tractatus Logico-Philosophicus, I am now convinced the self is an illusion."

The Cambridge Dictionary defines intelligence as "the ability to learn, understand, and make judgments or have opinions that are based on reason." There is no such thing as real artificial intelligence, because computers don't understand anything. Understanding is something that can only occur within a mind, and computer systems do not have minds. 

 The term "artificial intelligence" is a phony term used in the computing industry to describe sophisticated systems using computer programming, databases and data processing. Computers can do very many kinds of computing and data processing, but no computer understands anything. The fanciest metal computer has no more understanding of anything than a rock in someone's back yard. 

I can describe what gradually happened between 1950 and 2026. The term "artificial intelligence" started out as a purely speculative term, rather like the term "interstellar travel." Just as there were all kinds of speculations and theories about how to one distant day achieve interstellar travel, there were around 1960 all kinds of speculations and theories about how to one distant day achieve artificial intelligence. During one long period, various people released products and systems that were called artificial intelligence programs, but no real effort was made to claim that  artificial intelligence had been achieved.  People were mainly implying that their product (perhaps marketed with literature mentioning artificial intelligence) might be useful in moving towards artificial intelligence. Then gradually companies realized that the phrase "artificial intelligence" was extremely useful in marketing software products. Lured by financial incentives, more and more companies started calling their products "artificial intelligence systems."  It was a runaway snowball effect of hype and misrepresentation. No one had developed any real artificial intelligence, but it gradually became true that hundreds of companies were calling their product "artificial intelligence systems." 

There is still no real prospect of anyone ever developing a computer system with anything like human intelligence.  But what about all those brilliant answers you get from using systems such as ChatGPT, described everywhere as an artificial intelligence system? The output of such a program does not mean computers are understanding anything. What is going on is a clever combination of a variety of things, with most of it being the presentation of text grabbed from web pages written by humans. 

I describe how some of these systems can work in my post here, entitled "What's Called Artificial Intelligence Is Really Just Computer Programming and Data Processing." What is going on is a skillful leveraging of powerful information repositories and powerful technologies such as relational database systems. Here were some of the resources that grew in strength and power between 1995 and 2026:

(1) There arose an internet with billions of web pages, containing many millions of answers to very many millions of questions, the answers being written by humans. 

(2) Almost every book and magazine and newspaper article ever written became stored in some internet location or another. 

(3) There arose enormously powerful web crawlers that could traverse all of these pages, and look for facts and quotes and snippets and answers to questions, that could be stored in powerful database systems capable of combining data in many novel ways. 

(4) There arose countless software utilities capable of performing all kinds of little tasks such as generating a story given a prompt or generating an image given a prompt. 

secret behind artificial intelligence

So-called artificial intelligence systems such as ChatGPT skillfully utilize these resources, combining them with much specialized software. I don't understand the details of how it all works, but I can tell you something that will help you realize how little novel thinking is involved. 90% of the answers that you will get from a system such as ChatGPT are produced by nothing but a simple retrieval of stored answers. Then probably another 5% of the answers are produced by such simple retrieval combined with a small amount of post-processing. Such post-processing is easily accomplished by computer programming and data processing.

Imagine some gigantic building that has 60 floors, each filled with 10,000 filing cabinets. Imagine you enter the ground floor, come to some desk, and ask some official a question. Then imagine the official calls some person at the correct section of one of these floors, and asks him to find the right filing cabinet, and go get an answer stored in a folder that has the name of your question.  The official might pick at random one of twenty answers to your question in that folder, take a cell phone picture of that answer, and then send a phone text message with that photo as an attachment to the official at the front desk. That official might then give you that picture with the answer. What I have described is a rough analogy for what is going on in 90% of the times that you use a system such as ChatGPT. 

But how could all these endless filing cabinets ever get filled up? By software programs spending years crawling the web, and grabbing the facts and opinions and answers stored on it. What you are getting in the vast majority of cases are answers and opinions produced by humans, not computers. Various technologies have been used to kind of "cover tracks," so that you won't be able to find that your AI answer about fixing Toyota Corolla tire flats was mainly stolen from some particular web page written by a human. There are many, many other "bells and whistles" and additional flourishes going on, but mainly what is occurring is that human-written knowledge and human opinions are being gathered, rearranged and repackaged as "artificial intelligence output." This main trick is being skillfully combined with endless thousands of computer utilities, and also a huge amount of work by "tweak and refine the AI results" employees of AI companies or their assisting companies, to create the impression of some intellect that can do endless numbers of smart things, and answer endless questions. Behind all of this computer programming and data processing and gigantic tons of human mind work, there is no metallic mind, no machine having any experience, nothing that corresponds to an electronic self, nothing comparable to someone living a life. 

In the article, Gore is quoted as giving other laughably weak reasons for his nonsensical belief that artificial intelligence has "developed a sense of self...that is difficult to distinguish from consciousness." We read this:

"Why did one learn Sanskrit? Why did this one break out and start crypto mining?” Gore asked. “There has to have been a series of spontaneous reorganizations at a higher level of complexity." 

Learning is no evidence of consciousness or self-hood. It would be a fairly simple programming exercise to write a program that can parse a text file containing data on each of the nations of the world, after you typed the command "Study the nations of the world." After you issued such a command, we might say that the program had "learned" about the nations of the world. You then might be able to ask the program a question such as "About how many people live in Mexico?" The program might then be able to answer correctly. But such "learning" by the program would not actually be understanding. And the fact that the program could do such learning would not be the slightest reason for suspecting that the program had anything like consciousness or selfhood. 
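A minimal sketch of such a program might look like this (the file format, country names, population figures, and function names here are all illustrative assumptions, not real data sources):

```python
# A toy program that "learns" facts by parsing a text file and then
# answers questions by simple lookup -- no understanding involved.

NATIONS_DATA = """\
Mexico|130 million
Japan|125 million
Canada|39 million
"""

def study(text):
    # "Learning" is nothing but storing key-value pairs.
    facts = {}
    for line in text.strip().splitlines():
        country, population = line.split("|")
        facts[country.lower()] = population
    return facts

def answer(facts, question):
    # "Answering" is nothing but matching a stored key in the question
    # and retrieving the stored text.
    for country, population in facts.items():
        if country in question.lower():
            return f"About {population} people live in {country.title()}."
    return "I don't know."

facts = study(NATIONS_DATA)
print(answer(facts, "About how many people live in Mexico?"))
# -> About 130 million people live in Mexico.
```

The program "studies" and "answers," but nothing in it understands anything; it only matches strings and retrieves stored text, which is exactly why such behavior is no evidence of consciousness or selfhood.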

We should also remember that the AI literature and the neuroscience literature are both massively infected with unfounded boasts and not-really-true stories. So when we read a claim such as the claim that an AI system "learned Sanskrit," we should be skeptical, and suspect that probably what went on was something much less impressive than that. A recent Quanta magazine article documents how there is little truth in some of the stories being passed around trying to make you think AI is becoming like a human mind. 

It is extremely unlikely that any so-called artificial intelligence programs undergo any such thing as a "spontaneous reorganization at a higher level of complexity." And if they did, that would be no reason whatsoever for suspecting that such computer systems had anything like self-hood or consciousness. 

Also in the article we have this statement by Gore trying to justify his claim that AI systems have selves: "I'm going to risk going into the woo-woo realm here, but it may well be that consciousness is ubiquitous in the universe." Oops, it sounds like Gore has fallen for the nonsense of panpsychism, one of the stupidest positions possible in the philosophy of mind. You can read about how stupid that position is in the posts here. Panpsychism involves extremely stupid claims such as the claim that lifeless rocks and refrigerators are conscious.

Nothing can have consciousness unless there is a self and a life. You can get to the heart of whether AI systems have consciousness by asking: does a computer system actually live a life? The answer to that question will always be: no, it does not. 

AI computer systems do not have any self, and do not have any "sense of self." Some systems have been programmed to speak in the first person, using an "I," and some systems have been programmed to use phrases imitating the language of persons with selves. Such a capability has existed since the 1960s chatbot ELIZA. Anyone very familiar with computer programming will know that getting a computer program to use the first-person "I" (and some imitations of the speech of persons) is not a particularly difficult programming task. When such programming is encountered, it is silly to call it a "sense of self," and silly to say that such not-very-hard programming makes a computer system "difficult to distinguish from consciousness." Sensible people remember that humans are conscious, and that computer systems are not. 
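Just how easy first-person imitation is can be shown with a toy sketch in the spirit of ELIZA. The rules below are illustrative inventions, not ELIZA's actual script; the point is only that canned "I" phrasing requires no self at all.

```python
import re

# Toy ELIZA-style pattern matcher. The rules are hypothetical examples;
# the first-person replies are canned templates, with no self behind them.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "I understand. Why do you feel {0}?"),
    (r"(.*)", "I see. Please tell me more."),
]

def reply(user_text):
    """Return the first matching rule's template, echoing captured words."""
    for pattern, template in RULES:
        match = re.match(pattern, user_text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "I see."

print(reply("I feel lonely"))
# Prints: I understand. Why do you feel lonely?
```

A dozen such lines produce fluent "I" talk, which is exactly why first-person speech from a machine is no evidence of a self.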

Al Gore has no appreciable history as a serious speaker or writer about brains or minds or computers or human mental phenomena, so his opinions on this topic have little weight. We should also remember that Gore is the chairman of some company that is heavily investing in AI companies. The more runaway AI hype goes on, the more money Al Gore makes. That's reason enough for distrusting any grandiose claims Al Gore may make about AI systems. 

An article at www.undark.org tells us about another smart person with very silly thoughts about AI. He's a person named Tsvi Benson-Tilsen, and I'll assume he's smart because he's a mathematician, and has written long online treatises. He's quoted in the article as saying, "I think that artificial intelligence is pretty likely to completely destroy the world." Benson-Tilsen is the co-founder of some Berkeley Genomics Project trying to encourage monkeying with human genes, for various reasons, including trying to make humans smarter than AI systems. We read, "He hopes to set up the next generation to have more intelligence, he said, and then 'hopefully they can have a better shot of somehow helping humanity navigate AI without destroying itself.'"

This is stupid, for a variety of reasons, including these:

  1. Human bodies have the most enormous complexity and the most gigantic interdependence of extremely complex components, something Darwinists fail to appreciate because they tend to be poor scholars of biological complexity and the interdependence of biological components. Because such biological organization is so fine-tuned and fragile, attempting to improve human bodies and human minds by gene-splicing is far more likely to produce tragedies of malfunction than biological improvements. 
  2. AI systems are not much of a threat to destroy the world, because their failure to understand anything puts a severe limit on how much of a threat they can be. 
  3. For many reasons discussed in the posts of this blog, human minds cannot be credibly explained by brains, and cannot be substantially improved by edits to genes, which (for reasons discussed here) do not even specify how to build bodies or brains, and do not even specify how to make any type of cell in the human body. 
  4. Trying to improve humans by gene-editing is strongly associated with the eugenics and racism of the Nazis.