Thursday, October 2, 2025

How Guys Write Groundless "This Is How Your Brain Stores Memories" Articles

A type of article that shows up periodically in the literature of neuroscience is one with some title such as "How the Brain Stores Memories." All such stories are bogus examples of groundless claims and hand-waving. No one has any understanding of how a brain could store memories. There is nothing in the brain that bears any resemblance to a device for writing memories; there is nothing in the brain that bears any resemblance to a device for storing memories for many years; and there is nothing in the brain that bears any resemblance to a device for retrieving memories.

Let's look at a recent example of this type of misleading article, and try to derive from it some principles about how people go about writing articles of this type. The article is one published in Forbes magazine, entitled "Timing Is Everything: How Our Experiences Become Memories." The author (William A. Haseltine) gives us an appalling example of someone pretending to understand things he does not understand. The article describes him as someone who "covers genomics and regenerative medicine." That's already a reason for suspecting the accuracy of his article: the author is apparently not an expert in the field of cognitive neuroscience or memory.

The first sentence of the article is: "Memories are created in a matter of seconds." That's correct; humans can form permanent new memories instantly (such as when a son or daughter learns that a parent has died). But Haseltine fails to put two and two together by realizing that this fact of instant memory creation rules out every explanation he attempts to give. The processes he describes (mainly synapse strengthening) are mostly sluggish processes requiring many minutes, hours, or days. Processes so sluggish cannot be an explanation for the creation of a new memory, which can occur instantly. And since synapses are built from proteins with an average lifetime of less than two weeks, such unstable structures cannot be an explanation for human memories that can reliably persist for 50 years.

Trying to explain how a brain could create a memory, Haseltine gives us this vacuous bit of hand-waving:

"From the moment the brain receives a sensory input (i.e. sight, sound, smell, etc.), neurons across the brain activate. Connections formed between these neurons give rise to dynamic neural networks called engrams. For example, when exploring a new city, an engram forms and continuously updates as you walk down various streets and turn corners. The moment you finally encounter the landmark you have been searching for, there is a burst of neural activity. Neurons that were activated seconds prior also increase their firing. Your brain consolidates this information into a mental map of how to get to the landmark. Engram formation, therefore, depends not only on neurons firing simultaneously but also on those that activate immediately before and after. This is known as behavioral timescale learning. "

The term "engram" is a vacuous bit of speculation that does not correspond to any well-established scientific reality.  The term means an alleged place where a memory is stored in the brain. Microscopic examination of brain tissue has never revealed the existence of any such thing as engrams. No one has ever found information someone learned in school by microscopically examining tissue from that person's brain. When biologists use the term "engram" they are speculating as wildly as when astrophysicists use the terms "dark matter" and "dark energy." 

The account that Haseltine is giving here makes no sense, given the reality of instant memory creation. Neural connections are the synapses between neurons. It takes at least days for a new synapse to appear between neurons. So it is misleading bluffing for Haseltine to tell us a story of the arising of "dynamic neural networks" as an explanation for memory creation.

Haseltine's next paragraph is just a mention of what goes on all the time in the brain, something that does nothing at all to explain memory creation. He says this:

"When a neuron activates, an action potential is generated. First, an electrical or chemical input stimulates a dendritic branch on the neuron. If the stimulus is strong enough, a branch becomes activated. The signal travels through the cell body and into the neuron’s axon. The activated axon releases chemical messengers called neurotransmitters to activate other cells in the network. Neural activation lasts just two milliseconds before the cell resets to allow another action potential to be generated."

Haseltine's next paragraph begins with a statement of fact, followed by two statements of utterly unproven speculation, wrongly stated as if they were fact. He states this:

"Generating action potentials is the basis of all brain activity. During learning, action potentials transmit signals that encode new experiences. A key region involved in this process is the hippocampus. Here, the brain consolidates short-term memories into long-term memory." 

Yes, generating action potentials is the basis of all brain activity. No, there is no evidence that "action potentials transmit signals that encode new experiences."  No one understands how things that humans see and hear could ever be encoded or translated into some format that would allow memories to be stored as synapse states or neural states.  And there is no evidence that "the brain consolidates short-term memories into long-term memory" in the hippocampus or any other place.  We merely know that human beings can have short-term memories that don't last for long and long-term memories that are permanent.  Neuroscientists have no understanding of how a brain could create either short-term memories or long-term memories. 

Haseltine goes on and on, mainly mentioning sluggish things that go on in the brain that are way too slow to credibly account for instant memory formation. For example, he states this:

"Repeated stimulation from a presynaptic neuron (the neuron sending the signal) to a postsynaptic neuron (the neuron receiving the signal) triggers molecular changes. First, neurotransmitters released by the presynaptic neuron bind to receptors on the postsynaptic neuron. When neurotransmitters bind to these receptors, channels open that allow calcium ions to enter the neuron." 

Using Haseltine's article as an example, I can give a general outline for how people write bogus groundless articles with titles such as "This Is How Your Brain Stores Memories."  The steps are basically these:

Step 1: Accumulate a list of things that are constantly going on in the brain, things that occur at timescales ranging from seconds to hours, days, or weeks.

Such a list may include:

(a) the transmission of action potentials across chemical synapses, something that is constantly occurring billions of times every second in the brain;

(b) the transmission of neurotransmitter chemicals across such synaptic gaps, something that constantly occurs;

(c) the strengthening of existing synapses, something that takes many hours or days, and that goes on constantly regardless of whether anyone is learning or having sensory experiences;

(d) the creation of new synapses between neurons, something requiring days or weeks;

(e) the growth of new dendritic spines, which takes days or weeks;

(f) changes in the size of dendritic spines, which take days or weeks.

Step 2: Write some account of people acquiring a new memory or learning something, blending the account with as many items as possible from this list of types of neural activity, while paying no attention to the time required for those types of neural activity.

This is exactly what Haseltine has done in his article. He has left out any discussion of the hours and days required for the processes he mentions, so that the reader will not notice that what he is discussing is way, way too slow to explain instant human learning.

Step 3: Do a little science-jargon sprinkling, by doing things such as using the speculative term "engrams," by making vacuous uses of the terms "encoding" and "consolidation," and by maybe referring to some part of the brain such as the hippocampus or referring to some type of protein, claiming that it "plays a role" in memory formation.

Haseltine's only use of the word "encode" or "encoding" is vacuous,  when he claims that  "during learning, action potentials transmit signals that encode new experiences." The failure of neuroscientists to articulate any credible theory of neural encoding of memories -- and their failure to show any robust evidence of such a thing occurring -- are huge reasons for rejecting claims of the brain storage of memories.  If memories were to be stored in brains, there would have to be some gigantically complicated encoding scheme a million times more complicated than the genetic code, something capable of converting the huge variety of things humans can learn into synapse states or brain states.  There is no evidence that any such thing exists, and no neuroscientist has even stated a credible detailed theory of how such a thing could work. Haseltine also uses the neuroscience jargon word "consolidation" or "consolidate,"  but all of his references are vacuous, and never refer to any evidence for consolidation or a theory of consolidation. 

Step 4: Refer to some recent paper, typically some poorly designed study using way too few subjects and poor methods, almost always something merely involving mice.

This is exactly what Haseltine does. He refers us only to the poorly designed study "Dendritic, delayed, stochastic CaMKII activation in behavioural time scale plasticity," a mouse study that fails to even tell us how many mice were used. Whenever that happens, it is invariably because some way-too-small study group size was used, such as only 7 mice. The study makes no mention of the use of blinding, meaning it is a Questionable Research Practices affair we should not trust. A study that fails to tell how many subjects it used should not be trusted about anything.

This scientific paper says the following:

"Previous models have suggested that CaMKII functions as a bistable switch that could be the molecular correlate of long-term memory, but experiments have failed to validate these predictions....The CaMKII model system is never bistable at resting calcium concentrations, which suggests that CaMKII activity does not function as the biochemical switch underlying long-term memory."

This recent scientific paper says on page 9, "Overall, the studies reviewed here argue against, but do not completely rule out, a role for persistently self-sustaining CaMKII activity in maintaining" long term memory. 

Step 5: Skip the issue of the lifelong persistence of memories, because scientists have no clue as to how that could occur in a brain with such high molecular turnover.

Step 6: Skip the issue of instant memory recall, since the brain has nothing like any of the things that enable such recall in machines, things such as addressing, sorting and indexing.

[Image: neuroscientist explanation of memory]

An example of a vacuous "brain explanation" article is the Mayo Clinic's page entitled "How Your Brain Works." The page fails to tell us how a brain could perform any aspect of human mentality. What we mainly have is a discussion of different parts of the brain, and how neurons transmit chemical and electrical impulses. We have the thinnest of localization claims about the function of different parts of the brain, which are each one-sentence affairs completely lacking in details that might explain things. For example, we are told "The frontal lobes help control thinking, planning, organizing, problem-solving, short-term memory and movement." We have no explanation of how that might happen. We are told "the occipital lobes process images from your eyes and connect them to the images stored in your memory." But where does the writer think that memories are stored in the brain? He does not say. And how could such a storage of memory ever occur? The writer does not say. And how could a brain ever instantly recall something as soon as you hear a name or see an image? The writer does not say.

All that is going on in the Mayo Clinic's page entitled "How Your Brain Works" is hand-waving and description of parts of the brain, without any explanation at all as to how the brain could produce any cognitive effect. It's just the kind of article we might expect to get if claims of brains producing minds and brains storing memories were misconceptions, kept afloat by a constant repetition of socially constructed myths.

There's a general rule of thumb about credibly explaining any very impressive result in biology: such an explanation is almost always enormously harder than you might think at first, because of the failure of the human mind to conceive all the difficulties involved. Consider the case of trying to explain the origin of the human body.

Darwin started out by trying to explain the human body by the childishly simple idea that organisms undergo random changes, and that the luckier changes survive better. But a close consideration of the problem reveals a plethora of difficulties with that idea:

  • A simple variation in just the body of one member of a species would not explain how that species got some feature, because the variation would have to be an inheritable variation; and it is generally believed that acquired characteristics are not inherited. 
  • Any variation would require not just variation in the structure and internal arrangement of one organism, but presumably a variation in some schema that controlled such a structure and internal arrangement of the organism; and any such change would tend to be diluted in offspring that had a mixture of inheritance from male and female. 
  • Almost any lucky variation would be lost in subsequent generations, as the offspring of such generations would mainly have inheritance from organisms not having the lucky variation. 
  • There does not even exist within the body of any organism some schema that specifies the appearance, behavior traits, structure and internal arrangement of the organism.  The only known unit of inheritance is DNA and its genes, and such things are not a blueprint, recipe or program for making the body of any organism, contrary to the lies and misleading statements that biologists have told on this topic. DNA and its genes specify only low-level chemical information such as which amino acids go in particular proteins, not high-level instructions for anatomical assembly. 
  • Because of factors discussed here such as the general uselessness of early stages, nonfunctional intermediates and the interdependence of extremely complex components all required for many types of biological function, random variations in organisms or in some genetic material they use do not stand as a credible theory explaining the origin of any very complex creature (such as humans) requiring a very hierarchical arrangement of a huge number of well-arranged and interdependent parts. 
Just as there are "show stoppers" and unsolvable dilemmas all over the place in trying to describe some random natural process producing great wonders of physical functionality, there are "show stoppers" and unsolvable dilemmas all over the place in trying to describe how a brain could produce the wonders of human memory. Here is just one of them: given that the human brain has not changed substantially since 2000 B.C., and given that the English alphabet and language are only centuries old rather than thousands of years old, how could there ever occur by brain action an event such as the well-verified event of an old man memorizing the entire text of Milton's Paradise Lost, a poem of almost 80,000 words?

And how many words or characters of the English language have been found by a microscopic examination of brain tissue? Not a single word or character, even though a great deal of freshly extracted tissue from living persons has been microscopically examined, and even though the entire brains of many recently deceased people have been microscopically examined by neuroscientists eager to find a trace of some learned knowledge, without any success.

Sunday, September 28, 2025

Irredeemable: Reproducibility and Statistical Power in Neuroscience Are Very Bad, and Not Getting Any Better

 A recent study offers some encouraging news about psychology research. The paper is entitled "Increasing Sample Sizes in Psychology Over Time." The paper reports this:

"We collected data from 3176 studies across six journals over three years. Results show a significant increase in sample sizes over time (b=44.83, t(6.25)=4.48, p=.004, 95%CI[25.23,64.43]), with median sample sizes being 40 in 1995, 56.5 in 2006, and 122.5 in 2019. This growth appears to be a response to the credibility crisis....The increase in sample sizes is a promising development for the replicability and credibility of psychological science."

The credibility crisis referred to is the widely reported reproducibility crisis in fields such as psychology and neuroscience.  For decades it has been reported that experimental studies in psychology and neuroscience tend to be unreliable and poorly reproducible, largely because the sample sizes used were way too small.  This was commonly called a "reproducibility crisis in psychology," although it was very much a reproducibility crisis in both psychology and neuroscience. A tendency to produce studies with too-small sample sizes was just as prevalent in neuroscience as psychology. 

Psychology experiments typically involve humans, and advances in internet technology may have been a factor helping to lead to increased study group sizes in psychology. Decades ago a scientist might have found it necessary to recruit subjects to come into some laboratory where an experiment can be done. But now there are online platforms that allow people to sign up to be subjects in psychology experiments, while being paid for their efforts. This provides a very large pool of potential test subjects. A psychologist can now run experiments using subjects from across the USA or even multiple countries, by designing some experiment that subjects can participate in over the internet, while the subjects stay in the comfort of their homes.  

But while there may have been an increase in study group sizes used in psychology experiments, there has apparently been no such increase in the field of neuroscience. How could you honestly describe the state of experimental neuroscience? As an irredeemable cesspool consisting mostly of junk science studies that continue to have the same old fatal defects, such as the use of way-too-small study group sizes. Well-designed studies in cognitive neuroscience seem to be in the minority, outnumbered by junk science studies guilty of very bad Questionable Research Practices.

Scientific studies that use small sample sizes are typically unreliable, and often present false alarms, suggesting a causal relation when there is none. Such small sample sizes are particularly common in neuroscience studies, which often require expensive brain scans, not the type of thing that can be inexpensively done with many subjects. In 2013 the leading science journal Nature published a paper entitled "Power failure: why small sample size undermines the reliability of neuroscience." There is something called statistical power, which is the probability that a study will detect an effect that really exists; when statistical power is low, positive results are far more likely to be false alarms. The Nature paper found that the statistical power of the average neuroscience study is between 8% and 31%. With such a low statistical power, false alarms and false causal suggestions will be very common.

A scientific study with a statistical power of 50% is one that will have about a 50% chance of being successfully reproduced when someone attempts to reproduce it. Even when a statistical power of 50% is reached, the statistical power is not high enough for robust evidence to be claimed. In order to be robust evidence for an effect, a study must reach a higher statistical power such as 80%. When that power is reached, there is about an 80% chance that an attempt to reproduce the results will be successful.
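For readers who want to see the arithmetic, below is a minimal Python sketch of how statistical power depends on sample size for a standard two-sided, two-sample t-test. The assumed "medium" effect size (d = 0.5) and the per-group sample sizes are illustrative assumptions of mine, not figures taken from the Nature paper.

```python
# A minimal sketch of two-sample t-test power, computed from the
# noncentral t distribution. Assumptions: true effect size d = 0.5
# ("medium"), alpha = 0.05, equal group sizes.
import numpy as np
from scipy import stats

def two_sample_power(d, n, alpha=0.05):
    """Power: probability of detecting a true effect of size d
    with n subjects in each of two groups."""
    df = 2 * n - 2                        # degrees of freedom
    nc = d * np.sqrt(n / 2)               # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Probability the test statistic lands beyond the critical value
    # when the true effect is d.
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

for n in (10, 15, 30, 64):
    print(f"n = {n:2d} per group -> power = {two_sample_power(0.5, n):.2f}")
# Roughly 0.18, 0.26, 0.47, 0.80: small samples land squarely in the
# 8%-31% range the paper reports, and 64 per group is needed for 80%.
```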

The Nature paper said, "It is possible that false positives heavily contaminate the neuroscience literature." 

An article on this important Nature paper states the following:

"The group discovered that neuroscience as a field is tremendously underpowered, meaning that most experiments are too small to be likely to find the subtle effects being looked for and the effects that are found are far more likely to be false positives than previously thought. It is likely that many theories that were previously thought to be robust might be far weaker than previously imagined."

Scientific American reported on the paper with a headline of "New Study: Neuroscience Gets an 'F' for Reliability."

So, for example, when some neuroscience paper suggests that some part of your brain controls or mediates some mental activity, there is a large chance that may simply be a false positive. As this paper makes clear, the more comparisons a study makes, the larger a chance for a false positive. The study has an example: if you test whether jelly beans cause acne, you'll probably get a negative result, but if your sample size is small, and you test 30 different colors of jelly bean, you'll probably be able to say something like "there's a possible link between green jelly beans and acne"  -- simply because the more types of comparisons, the larger the chance of a false positive.  So when a neuroscientist tries to look for some part of your brain that causes some mental activity, and makes 30 different comparisons using 30  different brain regions, with a small sample size, he'll probably come up with some link he can report as "such and such a region of the brain is related to this activity." But there will be a high chance this is simply a false positive.  
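A toy calculation makes the multiple-comparisons point quantitative. Assuming 30 independent comparisons each tested at the conventional 0.05 significance level (both numbers merely illustrative), the chance of at least one false positive is nearly 80%:

```python
# A toy calculation of the multiple-comparisons effect described above.
# alpha is the significance threshold for each individual comparison;
# the comparisons are assumed independent (an idealization).
alpha = 0.05
for k in (1, 5, 30):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> chance of a false positive = {p_at_least_one:.0%}")
# 1 comparison -> 5%; 5 comparisons -> 23%; 30 comparisons (30 jelly bean
# colors, or 30 brain regions) -> about 79%.
```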

[Image: bad neuroscience lab]

The 2013 "Power Failure" paper discussed above was widely discussed in the neuroscience field, but a 2017 paper indicated that little or nothing had been done to fix the problem. Referring to an issue of the Nature Neuroscience journal, the author states, "Here I reproduce the statements regarding sample size from all 15 papers published in the August 2016 issue, and find that all of them except one essentially confess they are probably statistically underpowered," which is what happens when too small a sample size is used. 

A 2017 study entitled "Effect size and statistical power in the rodent fear conditioning literature -- A systematic review" looked at what percentage of 410 experiments used the standard of 15 animals per study group (needed for a moderately compelling statistical power of 50 percent).  The study found that only 12 percent of the experiments met such a standard.  What this basically means is that 88 percent of the experiments had low statistical power, and are not compelling evidence for anything.
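Here is a rough sketch of the sample-size arithmetic behind such a standard, using the statsmodels library. The assumed effect size (d = 0.8) is an illustrative assumption of mine; the review derived its own typical effect size from the rodent literature.

```python
# A sketch of the sample-size arithmetic behind a "15 animals per group"
# standard. Effect size d = 0.8 is an assumption for illustration only.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for target in (0.5, 0.8):
    n = solver.solve_power(effect_size=0.8, alpha=0.05, power=target)
    print(f"power = {target:.0%} -> about {n:.0f} animals per group")
# Under these assumptions, roughly 14 animals per group yields 50% power,
# and about 26 per group are needed for the conventional 80% -- so a
# study using only 7 mice falls far below either bar.
```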


[Image: low statistical power in neuroscience]


The 2017 scientific paper "Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature" contains some analysis and graphs suggesting that neuroscience is less reliable than psychology. Below is a quote from the paper:


"With specific respect to functional magnetic resonance imaging (fMRI), a recent analysis of 1,484 resting state fMRI data sets have shown empirically that the most popular statistical analysis methods for group analysis are inadequate and may generate up to 70% false positive results in null data. This result alone questions the published outcomes and interpretations of thousands of fMRI papers. Similar conclusions have been reached by the analysis of the outcome of an open international tractography challenge, which found that diffusion-weighted magnetic resonance imaging reconstructions of white matter pathways are dominated by false positive outcomes  Hence, provided that here we conclude that FRP [false report probability] is very high even when only considering low power and a general bias parameter (i.e., assuming that the statistical procedures used were computationally optimal and correct), FRP is actually likely to be even higher in cognitive neuroscience than our formal analyses suggest.

The paper draws a shocking conclusion that most published neuroscience results are false. The paper states the following: "In all, the combination of low power, selective reporting, and other biases and errors that have been well documented suggest that high FRP [false report probability] can be expected in cognitive neuroscience and psychology. For example, if we consider the recent estimate of 13:1 H0:H1 odds, then FRP [false report probability] exceeds 50% even in the absence of bias." The paper says of the neuroscience literature, "False report probability is likely to exceed 50% for the whole literature." 
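The false report probability arithmetic is simple enough to sketch. Below is a minimal Python illustration assuming the paper's 13:1 H0:H1 prior odds and a 0.05 significance level; the power values are illustrative assumptions of mine:

```python
# A minimal sketch of false report probability (FRP): among all
# "positive" findings, what fraction come from true null hypotheses?
def false_report_probability(alpha, power, h0_h1_odds):
    false_positives = alpha * h0_h1_odds   # positives arising from true nulls
    true_positives = power * 1.0           # positives arising from real effects
    return false_positives / (false_positives + true_positives)

for power in (0.8, 0.5, 0.2):
    frp = false_report_probability(0.05, power, 13)
    print(f"power = {power:.0%} -> FRP = {frp:.0%}")
# Even at 80% power, FRP is about 45%; at the 20% power typical of
# neuroscience studies, about 76% of positive reports would be false.
```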

In June of 2025 I searched on Google Scholar, trying to find some paper reporting on an improvement of sample sizes in neuroscience research. I could find no such paper. The sample sizes used in neuroscience research are very bad, and are not getting any better. Today's neuroscience research is a cesspool of dysfunction and misleading claims. There are no signs that it is improving its horribly dysfunctional ways. 

Why does this situation persist? There are two main reasons: economics and ideology. 

The economic explanation for bad science practices is explained rather well in the paper "The Natural Selection of Bad Science" by Paul E. Smaldino and Richard McElreath. In that paper we read this:

"Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery....We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates."

The paper has a shocking confession by a scientist who has worked on search committees searching for scientists to be hired. The scientist states this:

"I’ve been on a number of search committees. I don’t remember anybody looking at anybody’s papers. Number and IF [impact factor] of pubs are what counts."

This is a description of an economic ecosystem in which what  determines a scientist's career advancement is not the quality and reliability of the papers he has published, but the mere quantity of such papers, and how many citations such papers are getting. 

The paper ("The natural selection of bad science") states this: "In fields such as psychology, neuroscience and medicine, practices that increase false discoveries remain not only common, but normative." In this context "normative" means "more the rule than the exception." The paper states, "Some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures." Later the paper gives us some insight on the economics that help to increase the likelihood of scientists producing lots of low-quality research papers:

"If researchers are rewarded for publications and positive results are generally both easier to publish and more prestigious than negative results, then researchers who can obtain more positive results—whatever their truth value—will have an advantage. ...One way to better ensure that a positive result corresponds to a true effect is to make sure one’s hypotheses have firm theoretical grounding and that one’s experimental design is sufficiently well powered. However, this route takes effort and is likely to slow down the rate of production. An alternative way to obtain positive results is to employ techniques, purposefully or not, that drive up the rate of false positives. Such methods have the dual advantage of generating output at higher rates than more rigorous work, while simultaneously being more likely to generate publishable results. Although sometimes replication efforts can reveal poorly designed studies and irreproducible results, this is more the exception than the rule. For example, it has been estimated that less than 1% of all psychological research is ever replicated  and failed replications are often disputed. Moreover, even firmly discredited research is often cited by scholars unaware of the discreditation. Thus, once a false discovery is published, it can permanently contribute to the metrics used to assess the researchers who produced it....Campbell’s Law, stated in this paper’s epigraph, implies that if researchers are incentivized to increase the number of papers published, they will modify their methods to produce the largest possible number of publishable results rather than the most rigorous investigations."

What the paper is suggesting is that junk science is strongly incentivized in today's science research ecosystem. A scientist is more likely to succeed in academia if he produces a high quantity of low-quality research papers than if he produces a smaller quantity of high-quality research. There are several online sources that keep track of the number of papers that a scientist wrote or co-wrote, and the number of citations such papers got. There are no online sources that keep track of the quality and reliability of the papers that such a scientist produced. In such an environment, a scientist will be more likely to get ahead if he produces many low-quality papers rather than a smaller number of papers that are more reliable and truthful in the results reported.

[Image: junk science practices]

The economic motivations of badly behaving neuroscientists and similar bad actors are sketched in my diagram below, and explained in the post here. At the top left corner is the starting point of "quick and dirty" experimental designs with way too few subjects. The diagram charts how various types of people in various industries benefit from such malpractice.

[Diagram: academia cyberspace profit complex]

Another huge explanatory factor that helps explain the massive persistence of junk neuroscience studies is ideology. What we should never forget is that neuroscientists are members of a belief community.  That belief community is dedicated to promoting various dubious belief dogmas such as the dogma that the brain is the source of the human mind, and the dogma that the brain is the storage place of human memories. So in many cases junk science studies that a peer reviewer or an editor would normally be ashamed to approve for publication will be approved for publication, because the study appears to support some dogma or narrative that is cherished by members of the neuroscientist belief community. 

[Image: church of academia]

The main beliefs of the neuroscientist belief community are false beliefs.  Because of innumerable reasons discussed on this blog, there is no credibility in the claim that the brain is the source of the human mind, and there is no credibility in the claim that the brain is a storage place of human memories. When the beliefs of a belief community are true, the community does not need to rely on studies involving bad science practices or bad scholarly practices.  But when the beliefs of a belief community are false, that belief community may need to keep producing studies involving bad science practices or bad scholarly practices. That way the belief community can try to maintain an illusion that the evidence is favoring its cherished beliefs. 

Thursday, September 25, 2025

Consciousness Shadow-Speaking Is Only a Fraction of Materialism's Complexity Coverup

 Let us imagine someone showed you some cards on a table in his backyard, some cards that had been arranged into something that looked like a house of cards. Suppose the person tried to persuade you that no one had purposefully arranged the cards into such a house-like structure, and that the structure had appeared because of purely random, accidental effects (such as say, friction or random wind gusts). The more complex such a house of cards was, the less likely you would be to believe such an explanation.

If the person showed you a “house of cards” consisting of only one card leaning against another card to make an upside-down “V” shape, you might easily be willing to believe the person's theory of accidental construction. But suppose the person showed you a “house of cards” consisting of twenty cards. You then would be vastly less likely to believe the person's theory of accidental construction. If the person showed you a “house of cards” consisting of 50 cards, like the one below, you would never accept any theory that such an intricate and hard-to-achieve arrangement had occurred by chance.


Speaking more generally, I can invoke a general rule: what I can call the first rule of accidental construction.

The first rule of accidental construction: the credibility of any claim that an impressively organized final result was accidentally achieved is inversely proportional to the number of parts that had to be well-arranged to achieve such a result, and the amount of organization needed to achieve such a result.
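A toy calculation illustrates the rule. Assuming (purely for illustration) that blind chance puts any single part into a correct, well-arranged position with probability 0.5, independently of the other parts, the odds of the whole arrangement arising accidentally collapse exponentially with the number of parts:

```python
# A toy model of the first rule of accidental construction.
# p is the assumed chance that blind chance places any one part correctly.
p = 0.5
for parts in (2, 20, 50):
    chance = p ** parts
    print(f"{parts:2d} parts -> chance of accidental arrangement = {chance:.2e}")
# 2 parts (two leaning cards): 1 in 4. 20 parts: about 1 in a million.
# 50 parts: about 1 in 10**15 -- effectively never by accident.
```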

Because of this general rule, the rule that the more functionally organized something is the less likely it arose accidentally, there is a very important relation between the degree of organization and complexity in biological organisms and the credibility of Darwin's theory of natural biological origins. The relation is that the credibility of Darwin's theory of natural biological origins is inversely proportional to the degree of organization and functional complexity in biological organisms. The more organized and functionally complex that biological organisms are, the less likely that they might have appeared because of any accidental process.

Biological organisms have enormously high levels of organization. We know how to put together piece by piece aircraft carriers equipped with all their jets, but there is no team of scientists that could ever put together from scratch a living human body, by a molecule-by-molecule arrangement of parts. The human body has a more impressive degree of organization and complexity than any machine humans have ever manufactured. So the advocates of Darwinism have a tough situation. If they realistically depict organisms such as humans as being as functionally complex and hierarchically organized as they are, all attempts to sell Darwinism will be undermined. So the advocates of Darwinism routinely attempt to portray organisms and their parts as being vastly less complex than they are.

Again and again (particularly when speaking to the general public), mainstream biologists will give us kindergarten sketches of biological life, in which organisms or their parts are depicted as being enormously simpler than they are. They use a series of tricks by which people may be fooled into thinking that organisms and their parts are a hundred times simpler or a thousand times simpler or a million times simpler than they are.

To perform such a concealment, biologists very often engage in what I call shrink-speaking, which is misleadingly describing something as if it were vastly simpler than it is.  A person who describes the United States of America as "just a bunch of buildings" is engaging in shrink-speaking, as does a scientist who refers to you as "a bunch of chemical compounds." The same shrink-speaking would occur if someone described the volumes of a public library as "just some ink marks on paper."

Below are some of the tricks that are used as part of this gigantic complexity concealment. The tricks are most commonly used when professors are writing books for the general public or articles designed to be read mainly by the general public. Conversely, inside scientific papers rarely read by the public, professors often discuss the vast amount of organization and functional complexity of living things. 

Trick #1: The Frequent Mention of “Building Blocks of Life”

Scientists and science writers have long claimed that “building blocks of life” were produced by certain experiments, although the claims made along these lines are very erroneous and misleading. More baloney has been written about the Miller-Urey experiment (an experiment claimed to have produced “building blocks of life”) than almost any other scientific topic.

Without reviewing the huge number of misstatements that have been made about origin-of-life research, we can merely consider how utterly misleading is the very phrase "building blocks of life." The very term suggests that the simplest life would be something very simple. When we think of what is made from building blocks, we can think of something as simple as a wall or a simple house made of cinder blocks or bricks. But all cells are incomparably more complex than a simple house.

The building components of cells are organelles, which make up even the simplest prokaryotic cells. The building components of organelles are proteins and protein complexes, and the building components of proteins are mere amino acids. Whenever anyone describes an amino acid or a nucleobase as a "building block of life," that person is misrepresenting the complexity of life. An amino acid is merely a building component of a building component (a protein) of a building component (an organelle) of a cell.

Trick #2: Misleading Cell Diagrams

A staple of biological instruction materials is a diagram showing the contents of a cell. Such diagrams are usually profoundly misleading, because they make it look like a cell is hundreds or thousands of times simpler than it actually is.

Specifically:

  • A cell diagram will typically depict a cell as having only a few mitochondria, but cells typically have many thousands of mitochondria, as many as a million.
  • A cell diagram will typically depict a cell as having only a few lysosomes, but cells typically have hundreds of lysosomes.
  • A cell diagram will typically depict a cell as having only a few ribosomes, but a cell may have up to 10 million ribosomes.
  • A cell diagram will typically depict one or a few stacks of a Golgi apparatus, each with only a few cisternae, but a cell will typically have between 10 and 20 stacks, each having as many as 60 cisternae.
  • Cell diagrams create in the mind the idea of a cell as a static thing, when actual cells are centers of continuous activity, like some active factory or a building that is undergoing continuous construction and remodeling. 
[Image: misleading cell diagram]

Trick #3: Claiming that a Human Could Be Specified by a Mere “Blueprint” or “Recipe,” and Claiming DNA Has Such a Thing

The idea that DNA is a blueprint or recipe for making a human being is a claim that is both false and profoundly misleading, giving people a totally incorrect idea about the complexity of a human being. It is false that DNA has any such thing as a recipe or blueprint for making a human. DNA contains only low-level chemical information, such as information about the amino acids that make up proteins. DNA does not contain high-level structural information. DNA does not specify the overall body plan of a human, does not specify how to make any organ system or organ of a human, and does not even specify how to make any of the 200 types of cells in the human body. See this post for the statements of 18 science authorities telling you that DNA is not a blueprint or a recipe for making organisms.

Besides giving us an utterly false idea about the contents of DNA, claims such as "DNA is a blueprint for making humans" or "DNA is a recipe for making humans" create false ideas about the complexity of human beings. A blueprint is a large sheet of paper for doing a relatively simple construction job. A recipe is a single page for doing a relatively simple food preparation job. So whenever we hear people say something like "DNA is a blueprint for making a human" or "DNA is a recipe for making a human," we think that the construction of a human is a relatively simple affair. In reality, a human body is many thousands or millions of times too complex to be constructed from any blueprint or recipe.

If you were to give an analogy that would properly convey how complex the instructions needed for building a human would be, you might refer to something like a "long bookshelf filled with many volumes of construction blueprints" or a "long bookshelf filled with recipe books." But even such analogies would poorly describe the instructions for making a human, as they would give you the idea of a human being as something merely static, rather than something that is internally dynamic to the highest degree.

Trick #4: Trying to Conceal the Complexities of Human Minds, by Claiming that Human Minds Are Like Animal Minds

Humans are enormously complex not only in their physical bodies, but in their minds. The human mind is its own separate ocean of mental complexity apart from the ocean of physical complexity that is the human body. Darwinists have always had the greatest difficulty in accounting for the subtleties and complexities of human minds and human behavior. Very much of our mental activity seems like something inexplicable under any theory of natural selection. Much of what our minds do (such as mathematical ability, artistic creativity and philosophical reasoning) is of no survival value, and cannot be explained under any reasoning of survival-of-the-fittest or natural selection.

Darwinists have usually dealt with this problem by taking an approach of claiming that mentally humans are like animals. Such a claim is a gigantic example of complexity concealment, a case of trying to cover up the complexities of the human mind by sweeping them under the rug. Darwin committed this error most egregiously in a passage of The Descent of Man in which he made the extremely absurd claim that "there is no fundamental difference between man and the higher mammals in their mental faculties."

Trick #5: Using the Shadow-Speaking Term "Consciousness" To Refer to Human Mentality

A very common trick of modern scientists is to refer to the human mind (an extremely multifaceted and complex reality) by the minimalist term "consciousness,"  which would be a suitable term for describing the mind of an insect. A dictionary defines consciousness as being awake and aware of your surroundings. But human mentality is something vastly more complex and multifaceted than that. So using the term "consciousness" for human mentality is an example of shadow-speaking, language that makes something look like a mere shadow of what it is. 

While the term "problem of consciousness" is often used, what we actually have is not some mere "problem of consciousness" but an extremely large “problem of explaining human mental capabilities and human mental experiences” that is vastly larger than merely explaining consciousness. The problem includes all the following difficulties and many others:
  1. the problem of explaining how humans are able to have abstract ideas;
  2. the problem of explaining how humans are able to store learned information, despite the lack of any detailed theory as to how learned knowledge could ever be translated into neural states or synapse states;
  3. the problem of explaining how humans are able to reliably remember things for more than 50 years, despite extremely rapid protein turnover in synapses, which should prevent brain-based storage of memories for any period of time longer than a few weeks;
  4. the problem of how humans are able to instantly retrieve little-accessed information, despite the lack of anything like an addressing system or an indexing system in the brain;
  5. the problem of how humans are able to produce great works of creativity and imagination;
  6. the problem of how humans are able to be conscious at all;
  7. the problem of why humans have such a large variety of paranormal psychic experiences and capabilities such as ESP capabilities that have been well-established by laboratory tests, and near-death experiences that are very common, often occurring when brain activity has shut down;
  8. the problem of how humans have such diverse skills and experiences as mathematical reasoning, moral insight, philosophical reasoning, and refined emotional and spiritual experiences;
  9. the problem of self-hood and personal identity, why it is that we always continue to have the experience of being the same person, rather than just experiencing a bundle of miscellaneous sensations;
  10. the problem of intention and will, how is it that a mind can will particular physical outcomes.
It is therefore an example of a complexity cover-up and concealment for someone to refer to the human mind as merely "consciousness" or to speak as if there is some mere "problem of consciousness" when there is the vastly larger problem of explaining human minds that are so much more than mere consciousness.  Calling a human mind "consciousness" (a good term for describing the mind of a mouse) is like calling a city a bunch of bricks and lumber. 

[Diagram: aspects of human mentality]
Our minds are so much more than just "consciousness"

The diagram helps show the stupidity of the approach taken by many of today's thinkers, an approach in which the thinker tries to make his explanation task a million times easier by the trick of describing a mere "problem of consciousness" that needs to be solved. The human mind with its capabilities and experiences is a reality a million times more than mere "consciousness." It is an absurd problem misstatement to describe the problem of explaining human minds as a mere problem of explaining consciousness. The person who makes that mistake is committing a blunder as bad as the person who tries to reduce the problem of explaining the arising of human bodies to a mere "problem of solidity origination."

[Diagram: complexity of human minds]

Trick #6: Trying to Conceal the Complexities of Human Minds, by Denying the Evidence for Psi and Paranormal Abilities

As you can see by reading the 200+ posts here, we have two hundred years of very good observational and experimental evidence for paranormal human abilities such as clairvoyance and ESP, very much of it published in the writings of distinguished scientists and physicians. But the existence of such abilities is senselessly denied by very many of our professors. Denying the reality of psi is essentially a cover-up, a case of sweeping under the rug complexities of the human mind that you would prefer not to deal with, for the sake of depicting human minds as being much simpler than they are.

Trick #7: Failing to Describe the Complexity of Typical Protein Molecules or Larger Protein Molecules

When biologists and writers on biology describe protein molecules, they typically tell us that protein molecules have "many" amino acids. But almost never are we given a statement that informs us about how complex protein molecules are. It is very easy to do such a thing.

The first way to do such a thing is by simply mentioning that the average human protein molecule has about 370 amino acids, and that very many types of human protein molecules consist of thousands of specially arranged amino acids. Another way to do this is by an analogy. If we compare an amino acid to a letter, we can say that the average protein has the information complexity of a well-written paragraph, and that the larger protein molecules have the information complexity of a well-written page of text.
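The letter analogy can be checked with a back-of-the-envelope calculation. With 20 possible amino acids per position, each position carries about as much information as one English letter; the 2,000-amino-acid length used below for a large protein is merely an illustrative assumption:

```python
# A back-of-the-envelope check of the letter analogy: each protein position
# can hold any of 20 amino acids (log2(20) bits), roughly the information
# content of one English letter (log2(26) bits).
import math

bits_per_amino_acid = math.log2(20)   # ~4.32 bits
bits_per_letter = math.log2(26)       # ~4.70 bits

for length, label in ((370, "average human protein"), (2000, "large protein")):
    bits = length * bits_per_amino_acid
    letters = bits / bits_per_letter
    print(f"{label}: ~{bits:.0f} bits, equivalent to ~{letters:.0f} letters")
# The average protein carries about as much information as a 340-letter
# paragraph; a 2,000-amino-acid protein, roughly a full page of text.
```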

But we almost never are told such facts. Nine times out of ten a reader will simply be vaguely told that there are "many" amino acids in a protein. The complexity of protein molecules is almost always hidden from readers, who may go away with the very incorrect idea that a protein molecule consists of only 10 or 20 amino acids.

Trick #8: Failing to Discuss the Sensitivity of Protein Molecules

There are two ways to get an understanding of how organized and fine-tuned protein molecules are. The first is to learn how many parts they have (typically several hundred amino acid parts). The second is to learn how sensitive such molecules are to small changes, how easy it is to break the functionality of a protein by changing some of its amino acids. Some important papers have been written shedding light on how the functionality of protein molecules tends to be destroyed when only a small percentage of the molecule is changed. One such paper is the paper here, estimating that making a random change in a single amino acid of a protein (most of which have hundreds of amino acids) will have a 34% chance of leading to a protein's "functional inactivation."  Such papers tell us the very important truth that protein molecules are very sensitive to small changes, which means that they are exceptionally fine-tuned and functionally organized. But we almost never hear our professors discuss this extremely relevant truth. 
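The cited 34% figure implies a rapid collapse of function under random change, as this little calculation shows (treating successive substitutions as independent is a simplifying assumption on my part):

```python
# A quick calculation from the ~34% figure cited above: if each random
# amino acid substitution independently has a 34% chance of destroying a
# protein's function, the odds the protein stays functional shrink fast.
p_survive_one_change = 1 - 0.34
for k in (1, 3, 10):
    p_functional = p_survive_one_change ** k
    print(f"{k:2d} random substitutions -> P(still functional) = {p_functional:.1%}")
# One change: 66%. Three changes: about 29%. Ten changes: under 2% --
# even though ten changes alter only a few percent of a typical protein.
```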

Trick #9: Failing to Tell Us Protein Molecules Are Very Often Functional Only As a Part of Protein Complexes Involving Multiple Proteins

Protein complexes occur when a protein is not functional unless it combines with one or more other proteins, which act like a team to create a particular effect or do a particular job. When writing for the general public, our biology authorities conveniently mention as infrequently as they can the extremely relevant fact that a significant fraction of proteins are nonfunctional unless acting as team members inside a protein complex, a fact that makes Darwinian explanations of human biochemistry seem exponentially more improbable. An example is a recent paper estimating the likelihood of photosynthesis on other planets, which very misleadingly refers to photosynthesis as being something with "overall simplicity," conveniently failing to mention that photosynthesis requires at least four different protein complexes, making it something that can only be achieved by extremely organized functional arrangements of matter, incredibly unlikely to ever appear by chance or Darwinian processes.

Trick #10: Not Telling Us How Many Protein Molecules Are in a Typical Cell

How many protein molecules are in a typical cell? I doubt whether one high-school graduate in 10 could correctly answer this question within a factor of 100. Biology's concealment aces are good about hiding this important information from us.  The answer (about 40 million) almost never appears in print.  We do sometimes hear mention of the fact that the human body contains more than 20,000 different types of protein molecules (each a separate complex invention), but not nearly as often as we should. 

Trick #11: Misleading "Cell Types" Diagrams Suggesting There Are Only a Few Cell Types

How many different cell types are there in the human body? Our biologists frequently publish "cell types" diagrams listing only a few types of cells. Such charts cause people to think there are maybe 5 or 10 types of cells in the human body.  The actual number of cell types in the human body is something like 200. When did we ever see a diagram suggesting this reality?

Trick #12: Describing Human Bodies As If They Were Static Things, Ignoring the Vast Internal Dynamism of Organisms and Cells

Inside the human body and each of its cells there are a thousand simultaneous choreographies of activity.  The physical structure of a cell is as complex as the physical structure of a factory, and the internal activity inside a cell is as complex as all of the many types of worker activities going on inside a large factory. Such a very important reality is almost never discussed by our professors when writing for the public. Such people love to describe cells as "building blocks," as if they were static things like bricks or cinder blocks.  

Trick #13: Failing to Describe the Hierarchical Organization of Human Bodies

The organization of organisms is extremely hierarchical. Subatomic particles are organized into atoms, which are organized into relatively simple molecules such as amino acids, which are organized into complex molecules such as proteins, which are organized into more complex units such as protein complexes and cell structures called organelles, which are organized into cells, which are organized into tissues, which are organized into organs, which are organized into organ systems, which (along with skeletal systems) are organized into organisms. You will virtually never read a sentence like the previous one in something written by a professor, and we may wonder whether this is because a sentence like that one makes too clear the extremely hierarchical organization of organisms, something many of our biologists rather seem to want us not to know about.

Trick #14: Making It Sound As If Particular Organs Accomplish What Actually Requires Organ Systems and Fine-Tuned Biochemistry

In discussions involving biological origins, our professors often speak as if an eye will give you vision or a heart will give you blood circulation, or a stomach will give you food digestion.  But nobody sees just by eyes; they see by means of extremely complicated vision systems that require eyes, optic nerves, parts of the brain involved in vision, and very complex protein molecules.  And hearts are useless unless they are working with extremely complex cardiovascular systems that include lungs, veins, arteries, capillaries and very complex biochemistry.  And nobody digests food simply through a stomach, but through an extremely complicated digestive system consisting of many physical parts and very complex biochemistry. Our professors do an extremely poor job of explaining that things get done in organisms only when there are extremely complex systems consisting of many diverse parts working like a team to accomplish a particular effect. 

Trick #15: Making Scarce Mention of the Countless Different Types of Incredibly Fine-Tuned Biochemistry Needed for Organismic Function

Everywhere biological functionality requires exquisitely fine-tuned biochemistry. But we rarely hear about that in the articles and books of professors written for the general public.  An example of such fine-tuned biochemistry is the biochemistry involved in vision, which a biochemistry textbook describes like this:
  1. Light absorption converts 11-cis-retinal to all-trans-retinal, activating rhodopsin.
  2. Activated rhodopsin catalyzes replacement of GDP by GTP on transducin (T), which then dissociates into Tα-GTP and Tβγ.
  3. Tα-GTP activates cGMP phosphodiesterase (PDE) by binding and removing its inhibitory subunit (I).
  4. Active PDE reduces [cGMP] to below the level needed to keep cation channels open.
  5. Cation channels close, preventing influx of Na+ and Ca2+; membrane is hyperpolarized. This signal passes to the brain.
  6. Continued efflux of Ca2+ through the Na+-Ca2+ exchanger reduces cytosolic [Ca2+].
  7. Reduction of [Ca2+] activates guanylyl cyclase (GC) and inhibits PDE; [cGMP] rises toward “dark” level, reopening cation channels and returning Vm to prestimulus level.
  8. Rhodopsin kinase (RK) phosphorylates “bleached” rhodopsin; low [Ca2+] and recoverin (Recov) stimulate this reaction. Arrestin (Arr) binds the phosphorylated carboxyl terminus, inactivating rhodopsin.
  9. Slowly, arrestin dissociates, rhodopsin is dephosphorylated, and all-trans-retinal is replaced with 11-cis-retinal. Rhodopsin is ready for another phototransduction cycle.
We hear no mention of such requirements in typical discussions of the origin of vision, nor do we hear a discussion of how vision requires certain protein molecules consisting of hundreds of parts arranged in just the right way. Instead, our professors often speak as if vision could have somehow gotten started if something very simple existed. Such insinuations are absurdly false.

Such biochemical requirements are all over the place in biology. In general, any physical function of a body requires a vast amount of enormously complicated biochemistry which has to be just right. But you would hardly know such a thing from reading a typical article or book written by a professor.  The mountainous fine-tuned biochemistry complexity of every physical operation of living things is rarely mentioned, just as if our professors were trying to portray organisms as a thousand times simpler and less organized than they are.  

Darwinism seems to be of very little value in explaining such biochemistry. For example, the thirtieth edition of Harper's Illustrated Biochemistry is an 800-page textbook describing cells, genes, enzymes, proteins, metabolism, hormones, and biochemistry in the greatest detail, with abundant illustrations. The book makes no mention of Darwin, no mention of natural selection, and only a single mention of evolution, on a page talking only about whether evolution had anything to do with limited lifespans.

In papers and textbooks professors may accurately describe the complexities of humans, but so often in books and articles for the public such professors use sentences that can be compared to crude cartoon sketches. Human minds have oceanic depths of complexity, and human bodies have oceanic depths of organization. But very often reductionist shrink-speaking professors describe such oceanic realities as if they were crummy little puddles.

[Image: oversimplification by scientists]