Thursday, September 25, 2025

Consciousness Shadow-Speaking Is Only a Fraction of Materialism's Complexity Coverup

Let us imagine someone showed you some cards on a table in his backyard, cards that had been arranged into something that looked like a house of cards. Suppose the person tried to persuade you that no one had purposefully arranged the cards into such a house-like structure, and that the structure had appeared because of purely random, accidental effects (such as, say, friction or random wind gusts). The more complex such a house of cards was, the less likely you would be to believe such an explanation.

If the person showed you a “house of cards” consisting of only one card leaning against another card to make an upside-down “V” shape, you might easily be willing to believe the person's theory of accidental construction. But suppose the person showed you a “house of cards” consisting of twenty cards. You then would be vastly less likely to believe the person's theory of accidental construction. If the person showed you a “house of cards” consisting of 50 cards, like the one below, you would never accept any theory that such an intricate and hard-to-achieve arrangement had occurred by chance.


Speaking more generally, I can invoke a general rule, what I will call the first rule of accidental construction.

The first rule of accidental construction: the credibility of any claim that an impressively organized final result was accidentally achieved is inversely proportional to the number of parts that had to be well-arranged to achieve such a result, and the amount of organization needed to achieve such a result.

Because of this general rule, the rule that the more functionally organized something is the less likely it arose accidentally, there is a very important relation between the degree of organization and complexity in biological organisms and the credibility of Darwin's theory of natural biological origins. The relation is that the credibility of Darwin's theory of natural biological origins is inversely proportional to the degree of organization and functional complexity in biological organisms. The more organized and functionally complex that biological organisms are, the less likely that they might have appeared because of any accidental process.

Biological organisms have enormously high levels of organization. We know how to put together, piece by piece, aircraft carriers equipped with all their jets, but no team of scientists could ever put together from scratch a living human body, by a molecule-by-molecule arrangement of parts. The human body has a more impressive degree of organization and complexity than any machine humans have ever manufactured. So the advocates of Darwinism face a tough situation. If they realistically depict organisms such as humans as being as functionally complex and hierarchically organized as they are, all attempts to sell Darwinism will be undermined. So the advocates of Darwinism routinely attempt to portray organisms and their parts as being vastly less complex than they are.

Again and again (particularly when speaking to the general public), mainstream biologists give us kindergarten-level sketches of biological life, in which organisms or their parts are depicted as being enormously simpler than they are. They use a series of tricks by which people may be fooled into thinking that organisms and their parts are a hundred times simpler or a thousand times simpler or a million times simpler than they are.

To perform such a concealment, biologists very often engage in what I call shrink-speaking, which is misleadingly describing something as if it were vastly simpler than it is. A person who describes the United States of America as "just a bunch of buildings" is engaging in shrink-speaking, as is a scientist who refers to you as "a bunch of chemical compounds." The same shrink-speaking would occur if someone described the volumes of a public library as "just some ink marks on paper."

Below are some of the tricks that are used as part of this gigantic complexity concealment. The tricks are most commonly used when professors are writing books for the general public or articles designed to be read mainly by the general public. Conversely, inside scientific papers rarely read by the public, professors often discuss the vast amount of organization and functional complexity of living things. 

Trick #1: The Frequent Mention of “Building Blocks of Life”

Scientists and science writers have long claimed that “building blocks of life” were produced by certain experiments, although the claims made along these lines are very erroneous and misleading. More baloney has been written about the Miller-Urey experiment (an experiment claimed to have produced “building blocks of life”) than almost any other scientific topic.

Without reviewing the huge number of misstatements that have been made about origin-of-life research, we can merely consider how utterly misleading the very phrase "building blocks of life" is. The very term suggests that the simplest life would be something very simple. When we think of what is made from building blocks, we can think of something as simple as a wall or a simple house made of cinder blocks or bricks. But all cells are incomparably more complex than a simple house.

The building components of cells are organelles, which make up even the simplest prokaryotic cells. The building components of organelles are proteins and protein complexes, and the building components of proteins are mere amino acids. Whenever anyone describes an amino acid or a nucleobase as a "building block of life," that person is misrepresenting the complexity of life. An amino acid is merely a building component of a building component (a protein) of a building component (an organelle) of a cell.

Trick #2: Misleading Cell Diagrams

A staple of biological instruction materials is a diagram showing the contents of a cell. Such diagrams are usually profoundly misleading, because they make it look like a cell is hundreds or thousands of times simpler than it actually is.

Specifically:

  • A cell diagram will typically depict a cell as having only a few mitochondria, but cells typically have many thousands of mitochondria, as many as a million.
  • A cell diagram will typically depict a cell as having only a few lysosomes, but cells typically have hundreds of lysosomes.
  • A cell diagram will typically depict a cell as having only a few ribosomes, but a cell may have up to 10 million ribosomes.
  • A cell diagram will typically depict one or a few stacks of a Golgi apparatus, each with only a few cisternae, but a cell will typically have between 10 and 20 stacks, each having as many as 60 cisternae.
  • Cell diagrams create in the mind the idea of a cell as a static thing, when actual cells are centers of continuous activity, like some active factory or a building that is undergoing continuous construction and remodeling.
misleading cell diagram

Trick #3: Claiming that a Human Could Be Specified by a Mere “Blueprint” or “Recipe,” and Claiming DNA Has Such a Thing

The idea that DNA is a blueprint or recipe for making a human being is a claim that is both false and profoundly misleading, giving people a totally incorrect idea about the complexity of human beings. It is false that DNA has any such thing as a recipe or blueprint for making a human. DNA contains only low-level chemical information, such as information about the amino acids that make up proteins. DNA does not contain high-level structural information. DNA does not specify the overall body plan of a human, does not specify how to make any organ system or organ of a human, and does not even specify how to make any of the 200 types of cells in the human body. See this post for the statements of 18 science authorities telling you that DNA is not a blueprint or a recipe for making organisms.

Besides giving us an utterly false idea about the contents of DNA, claims such as "DNA is a blueprint for making humans" or "DNA is a recipe for making humans" create false ideas about the complexity of human beings. A blueprint is a large sheet of paper specifying a relatively simple construction job. A recipe is a single page specifying a relatively simple food preparation job. So whenever we hear people say something like "DNA is a blueprint for making a human" or "DNA is a recipe for making a human," we think that the construction of a human is a relatively simple affair. In reality, a human body is many thousands or millions of times too complex to be constructed from any blueprint or recipe.

If you were to give an analogy that would properly convey how complex the instructions needed for building a human would be, you might refer to something like "a long bookshelf filled with many volumes of construction blueprints" or a "long bookshelf filled with recipe books." But even such analogies would poorly describe the instructions for making a human, as they would give you the idea of a human being as something merely static, rather than something that is internally dynamic to the highest degree.

Trick #4: Trying to Conceal the Complexities of Human Minds, by Claiming that Human Minds Are Like Animal Minds

Humans are enormously complex not only in their physical bodies, but in their minds. The human mind is its own separate ocean of mental complexity apart from the ocean of physical complexity that is the human body. Darwinists have always had the greatest difficulty in accounting for the subtleties and complexities of human minds and human behavior. Very much of our mental activity seems like something inexplicable under any theory of natural selection. Much of what our minds do (such as mathematical ability, artistic creativity and philosophical reasoning) is of no survival value, and cannot be explained under any reasoning of survival-of-the-fittest or natural selection.

Darwinists have usually dealt with this problem by claiming that mentally humans are like animals. Such a claim is a gigantic example of complexity concealment, a case of trying to cover up the complexities of the human mind by sweeping them under the rug. Darwin committed this error most egregiously in a passage of The Descent of Man in which he made the extremely absurd claim that "there is no fundamental difference between man and the higher mammals in their mental faculties."

Trick #5: Using the Shadow-Speaking Term "Consciousness" To Refer to Human Mentality

A very common trick of modern scientists is to refer to the human mind (an extremely multifaceted and complex reality) by the minimalist term "consciousness,"  which would be a suitable term for describing the mind of an insect. A dictionary defines consciousness as being awake and aware of your surroundings. But human mentality is something vastly more complex and multifaceted than that. So using the term "consciousness" for human mentality is an example of shadow-speaking, language that makes something look like a mere shadow of what it is. 

While the term "problem of consciousness" is often used, what we actually have is not some mere "problem of consciousness" but an extremely large “problem of explaining human mental capabilities and human mental experiences” that is vastly larger than merely explaining consciousness. The problem includes all the following difficulties and many others:
  1. the problem of explaining how humans are able to have abstract ideas;
  2. the problem of explaining how humans are able to store learned information, despite the lack of any detailed theory as to how learned knowledge could ever be translated into neural states or synapse states;
  3. the problem of explaining how humans are able to reliably remember things for more than 50 years, despite extremely rapid protein turnover in synapses, which should prevent brain-based storage of memories for any period of time longer than a few weeks;
  4. the problem of how humans are able to instantly retrieve little-accessed information, despite the lack of anything like an addressing system or an indexing system in the brain;
  5. the problem of how humans are able to produce great works of creativity and imagination;
  6. the problem of how humans are able to be conscious at all;
  7. the problem of why humans have such a large variety of paranormal psychic experiences and capabilities such as ESP capabilities that have been well-established by laboratory tests, and near-death experiences that are very common, often occurring when brain activity has shut down;
  8. the problem of how humans have such diverse skills and experiences as mathematical reasoning, moral insight, philosophical reasoning, and refined emotional and spiritual experiences;
  9. the problem of self-hood and personal identity, why it is that we always continue to have the experience of being the same person, rather than just experiencing a bundle of miscellaneous sensations;
  10. the problem of intention and will, how it is that a mind can will particular physical outcomes.
It is therefore an example of a complexity cover-up and concealment for someone to refer to the human mind as merely "consciousness" or to speak as if there is some mere "problem of consciousness" when there is the vastly larger problem of explaining human minds that are so much more than mere consciousness.  Calling a human mind "consciousness" (a good term for describing the mind of a mouse) is like calling a city a bunch of bricks and lumber. 

aspects of human mentality
Our minds are so much more than just "consciousness"

The diagram helps show the stupidity of the approach taken by many of today's thinkers, an approach in which the thinker tries to make his explanation task a million times easier by the trick of describing a mere "problem of consciousness" that needs to be solved. The human mind, with its capabilities and experiences, is a reality a million times richer than mere "consciousness." It is an absurd misstatement of the problem to describe the problem of explaining human minds as a mere problem of explaining consciousness. The person who makes that mistake is committing a blunder as bad as the person who tries to reduce the problem of explaining the arising of human bodies to a mere "problem of solidity origination."

complexity of human minds

Trick #6: Trying to Conceal the Complexities of Human Minds, by Denying the Evidence for Psi and Paranormal Abilities

As you can see by reading the 200+ posts here, we have two hundred years of very good observational and experimental evidence for paranormal human abilities such as clairvoyance and ESP, very much of it published in the writings of distinguished scientists and physicians. But the existence of such abilities is senselessly denied by very many of our professors. Denying the reality of psi is essentially a cover-up, a case of sweeping under the rug complexities of the human mind that you would prefer not to deal with, for the sake of depicting human minds as being much simpler than they are.

Trick #7: Failing to Describe the Complexity of Typical Protein Molecules or Larger Protein Molecules

When biologists and writers on biology describe protein molecules, they typically tell us that protein molecules have "many" amino acids. But almost never are we given a statement that informs us how complex protein molecules are. It would be very easy to do such a thing.

The first way to do such a thing is by simply mentioning that the average human protein molecule has about 370 amino acids, and that very many types of human protein molecules consist of thousands of specially arranged amino acids. Another way to do this is by an analogy. If we compare an amino acid to a letter, we can say that the average protein has the information complexity of a well-written paragraph, and that the larger protein molecules have the information complexity of a well-written page of text.

But we are almost never told such facts. Nine times out of ten a reader will simply be vaguely told that there are "many" amino acids in a protein. The complexity of protein molecules is almost always hidden from readers, who may go away with the very incorrect idea that a protein molecule consists of only 10 or 20 amino acids.
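If you want to put a rough number on the letter analogy above, a little arithmetic is all it takes. Below is a minimal sketch in Python; the 20-letter amino acid "alphabet" and the 370-residue average are the figures mentioned above, and the rest is just arithmetic offered as an illustration, not a precise information-theoretic claim.

```python
import math

# Rough assumptions taken from the discussion above:
# an "alphabet" of 20 standard amino acids, and an average
# human protein length of about 370 amino acids.
ALPHABET_SIZE = 20
AVERAGE_PROTEIN_LENGTH = 370

bits_per_residue = math.log2(ALPHABET_SIZE)             # about 4.3 bits per amino acid
total_bits = bits_per_residue * AVERAGE_PROTEIN_LENGTH  # about 1600 bits in total

print(f"{bits_per_residue:.1f} bits per amino acid")
print(f"about {total_bits:.0f} bits for an average 370-residue protein")
```

Treating each amino acid as a letter, 370 specially arranged residues is about the length of a substantial paragraph, which is the comparison made above.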

Trick #8: Failing to Discuss the Sensitivity of Protein Molecules

There are two ways to get an understanding of how organized and fine-tuned protein molecules are. The first is to learn how many parts they have (typically several hundred amino acid parts). The second is to learn how sensitive such molecules are to small changes, how easy it is to break the functionality of a protein by changing some of its amino acids. Some important papers have been written shedding light on how the functionality of protein molecules tends to be destroyed when only a small percentage of the molecule is changed. One such paper is the paper here, estimating that making a random change in a single amino acid of a protein (most of which have hundreds of amino acids) will have a 34% chance of leading to a protein's "functional inactivation."  Such papers tell us the very important truth that protein molecules are very sensitive to small changes, which means that they are exceptionally fine-tuned and functionally organized. But we almost never hear our professors discuss this extremely relevant truth. 
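To get a feel for what a 34% per-substitution inactivation rate implies, here is a minimal back-of-the-envelope sketch in Python. It rests on a simplifying assumption of my own (that each random substitution independently carries that 34% risk), so treat it only as an illustration of sensitivity, not as a result from the paper itself.

```python
# Back-of-the-envelope illustration, not a result from the paper discussed above.
# Simplifying assumption: each random amino acid substitution independently
# carries the cited 34% chance of "functional inactivation."
P_INACTIVATION_PER_CHANGE = 0.34

for num_changes in (1, 2, 5, 10):
    p_still_functional = (1 - P_INACTIVATION_PER_CHANGE) ** num_changes
    print(f"{num_changes:2d} random substitutions -> about "
          f"{p_still_functional:.1%} chance the protein still works")
```

Under that simplifying assumption, a protein hit by ten random substitutions would have only about a one or two percent chance of remaining functional, which gives some idea of how little random tampering such molecules can tolerate.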

Trick #9: Failing to Tell Us Protein Molecules Are Very Often Functional Only As a Part of Protein Complexes Involving Multiple Proteins

Protein complexes occur when a protein is not functional unless it combines with one or more other proteins, which act like a team to create a particular effect or do a particular job. When writing for the general public, our biology authorities conveniently mention as infrequently as they can the extremely relevant fact that a significant fraction of proteins are nonfunctional unless acting as team members inside a protein complex, a fact that makes Darwinian explanations of human biochemistry seem exponentially more improbable. An example is a recent paper estimating the likelihood of photosynthesis on other planets, which very misleadingly refers to photosynthesis as having "overall simplicity," conveniently failing to mention that photosynthesis requires at least four different protein complexes, making it something that can only be achieved by extremely organized functional arrangements of matter, incredibly unlikely to ever appear by chance or Darwinian processes.

Trick #10: Not Telling Us How Many Protein Molecules Are in a Typical Cell

How many protein molecules are in a typical cell? I doubt whether one high-school graduate in 10 could correctly answer this question within a factor of 100. Biology's concealment aces are good at hiding this important information from us. The answer (about 40 million) almost never appears in print. We do sometimes hear mention of the fact that the human body contains more than 20,000 different types of protein molecules (each a separate complex invention), but not nearly as often as we should.

Trick #11: Misleading "Cell Types" Diagrams Suggesting There Are Only a Few Cell Types

How many different cell types are there in the human body? Our biologists frequently publish "cell types" diagrams listing only a few types of cells. Such charts cause people to think there are maybe 5 or 10 types of cells in the human body.  The actual number of cell types in the human body is something like 200. When did we ever see a diagram suggesting this reality?

Trick #12: Describing Human Bodies As If They Were Static Things, Ignoring the Vast Internal Dynamism of Organisms and Cells

Inside the human body and each of its cells there are a thousand simultaneous choreographies of activity.  The physical structure of a cell is as complex as the physical structure of a factory, and the internal activity inside a cell is as complex as all of the many types of worker activities going on inside a large factory. Such a very important reality is almost never discussed by our professors when writing for the public. Such people love to describe cells as "building blocks," as if they were static things like bricks or cinder blocks.  

Trick #13: Failing to Describe the Hierarchical Organization of Human Bodies

The organization of organisms is extremely hierarchical. Subatomic particles are organized into atoms, which are organized into relatively simple molecules such as amino acids, which are organized into complex molecules such as proteins, which are organized into more complex units such as protein complexes and cell structures called organelles, which are organized into cells, which are organized into tissues, which are organized into organs, which are organized into organ systems, which (along with skeletal systems) are organized into organisms. You will virtually never read a sentence like the previous one in something written by a professor, and we may wonder whether this is because a sentence like that one makes too clear the extremely hierarchical organization of organisms, something many of our biologists rather seem to want us not to know about.

Trick #14: Making It Sound As If Particular Organs Accomplish What Actually Requires Organ Systems and Fine-Tuned Biochemistry

In discussions involving biological origins, our professors often speak as if an eye will give you vision, or a heart will give you blood circulation, or a stomach will give you food digestion. But nobody sees by eyes alone; we see by means of extremely complicated vision systems that require eyes, optic nerves, parts of the brain involved in vision, and very complex protein molecules. And hearts are useless unless they are working within extremely complex cardiovascular systems that include lungs, veins, arteries, capillaries and very complex biochemistry. And nobody digests food simply through a stomach, but through an extremely complicated digestive system consisting of many physical parts and very complex biochemistry. Our professors do an extremely poor job of explaining that things get done in organisms only when there are extremely complex systems consisting of many diverse parts working like a team to accomplish a particular effect.

Trick #15: Making Scarce Mention of the Countless Different Types of Incredibly Fine-Tuned Biochemistry Needed for Organismic Function

Everywhere biological functionality requires exquisitely fine-tuned biochemistry. But we rarely hear about that in the articles and books of professors written for the general public.  An example of such fine-tuned biochemistry is the biochemistry involved in vision, which a biochemistry textbook describes like this:
  1. Light-absorption converts 11-cis retinal to all-trans-retinal, activating rhodopsin.
  2. Activated rhodopsin catalyzes replacement of GDP by GTP on transducin (T), which then dissociates into Tα-GTP and Tβγ.
  3. Tα-GTP activates cGMP phosphodiesterase (PDE) by binding and removing its inhibitory subunit (I).
  4. Active PDE reduces [cGMP] to below the level needed to keep cation channels open.
  5. Cation channels close, preventing influx of Na+ and Ca2+; membrane is hyperpolarized. This signal passes to the brain.
  6. Continued efflux of Ca2+ through the Na+-Ca2+ exchanger reduces cytosolic [Ca2+].
  7. Reduction of [Ca2+] activates guanylyl cyclase (GC) and inhibits PDE; [cGMP] rises toward “dark” level, reopening cation channels and returning Vm to prestimulus level.
  8. Rhodopsin kinase (RK) phosphorylates “bleached” rhodopsin; low [Ca2+] and recoverin (Recov) stimulate this reaction. Arrestin (Arr) binds the phosphorylated carboxyl terminus, inactivating rhodopsin.
  9. Slowly, arrestin dissociates, rhodopsin is dephosphorylated, and all-trans-retinal is replaced with 11-cis-retinal. Rhodopsin is ready for another phototransduction cycle.
We hear no mention of such requirements in typical discussions of the origin of vision, nor do we hear a discussion of how vision requires certain protein molecules consisting of hundreds of parts arranged in just the right way. Instead, our professors often speak as if vision could have somehow gotten started if something very simple existed. Such insinuations are absurdly false.

Such biochemical requirements are all over the place in biology. In general, any physical function of a body requires a vast amount of enormously complicated biochemistry which has to be just right. But you would hardly know such a thing from reading a typical article or book written by a professor. The mountain of fine-tuned biochemical complexity behind every physical operation of living things is rarely mentioned, just as if our professors were trying to portray organisms as a thousand times simpler and less organized than they are.

Darwinism seems to be of very little value in explaining such biochemistry. For example, the thirtieth edition of Harper's Illustrated Biochemistry is an 800-page textbook describing cells, genes, enzymes, proteins, metabolism, hormones, and biochemistry in the greatest detail, with abundant illustrations. The book makes no mention of Darwin, no mention of natural selection, and only a single mention of evolution, on a page talking only about whether evolution had anything to do with limited lifespans.

In papers and textbooks professors may accurately describe the complexities of humans, but so often in books and articles for the public such professors use sentences that can be compared to crude cartoon sketches.   Human minds have oceanic depths of complexity, and human bodies have oceanic depths of organization. But very often it is as if reductionist shrink-speaking professors describe such oceanic realities as if they were crummy little puddles. 

oversimplification by scientists

Monday, September 22, 2025

Neuroscientists Misspeak Badly When They Refer to "Representations"

Claims by neuroscientists that they have found "representations" in the brain (other than genetic representations) are examples of what very abundantly exists in biology: groundless achievement legends. There is no robust evidence for any such representations. 

Excluding the genetic information stored in DNA and its genes, there are simply no physical signs of learned information stored in a brain in any kind of organized format that resembles some kind of system of representation. If learned information were stored in a brain, it would tend to have an easily detected hallmark: the hallmark of token repetition.  There would be some system of tokens, each of which would represent something, perhaps a sound or a color pixel or a letter. There would be very many repetitions of different types of symbolic tokens.   Some examples of tokens are given below. Other examples of tokens include nucleotide base pairs (which in particular combinations of 3 base pairs represent particular amino acids), and also coins and bills (some particular combination of coins and bills can represent some particular amount of wealth). 

symbolic tokens
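To make concrete what a genuine system of repeated symbolic tokens looks like, consider the genetic code just mentioned, in which each three-letter combination of nucleotide bases stands for a particular amino acid. The minimal sketch below (showing only a handful of the 64 standard codons) is the kind of repeating token scheme we would expect to find if learned information were physically stored somewhere:

```python
# A small excerpt of the standard genetic code: each three-letter DNA codon
# is a symbolic token that represents a particular amino acid (or a stop signal).
CODON_TABLE = {
    "ATG": "Methionine",   # also the usual start signal
    "TGG": "Tryptophan",
    "TTT": "Phenylalanine",
    "TTC": "Phenylalanine",
    "AAA": "Lysine",
    "AAG": "Lysine",
    "TAA": "STOP",
}

# Reading a stretch of DNA means finding the same tokens repeated over and over,
# which is the hallmark of a genuine representational system.
sequence = "ATGTTTAAGTGGTAA"
codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
print([CODON_TABLE[c] for c in codons])
# ['Methionine', 'Phenylalanine', 'Lysine', 'Tryptophan', 'STOP']
```

Nothing resembling such a repeating token scheme has ever been identified for learned information in brain tissue.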

Other than the nucleotide base pair triple combinations that represent mere low-level chemical information such as amino acids, something found in neurons and many other types of cells outside of the brain, there is no sign at all of any repetition of symbolic tokens in the brain. Except for genetic information which is merely low-level chemical information, we can find none of the hallmarks of symbolic information (the repetition of symbolic tokens) inside the brain. No one has ever found anything that looks like traces or remnants of learned information by studying brain tissue. If you cut off some piece of brain tissue when someone dies, and place it under the most powerful electron microscope, you will never find any evidence that such tissue stored information learned during a lifetime, and you will never be able to figure out what a person learned from studying such tissue.  This is one reason why scientists and law enforcement officials never bother to preserve the brains of dead people in hopes of learning something about what such people experienced during their lives, or what they thought or believed, or what deeds they committed.    

But despite their complete failure to find any robust evidence of non-genetic representations in the brain, neuroscientists often make groundless boasts of having discovered representations in brains. What is going on is a combination of two things:

(1) Misspeaking and language abuse by neuroscientists, in which they misleadingly use the term "representations" to refer to mere fleeting blips of brain activity that may properly be called "neural correlates" but not honestly called "representations."

(2) Pareidolia, people reporting seeing something that is not there, after wishfully analyzing large amounts of ambiguous and hazy data. It's like someone eagerly analyzing his toast every day for years, looking for something that looks like the face of Jesus, and eventually reporting he saw something that looked to him like the face of Jesus.  It's also like someone walking in many different forests, eagerly looking for faces on trees, and occasionally reporting a success. 

pareidolia

Let's consider the first of these two things. You are using language in a misleading way when you refer to a mere correlate as a representation.  I will give an example. Imagine you do a study in which you photograph faces of American football fans when their team scores a touchdown. You might be able to detect a correlation between the times that a touchdown was scored and particular expressions of fans. You may find that football fans are more likely to look like this just after a touchdown was scored. 


But you would be speaking in a misleading way if you called such an expression a "representation" of a touchdown. The expression is merely one that is a correlate of a touchdown: it is an expression more often occurring when a touchdown occurs (and also when any successful play occurs, such as a long run or a long pass). 

There is in American football an actual representation of a scoring event: the sign that a referee makes when he places his two arms up in the air. So the sign below is an actual representation of a football touchdown or a field goal. 


It is a big mistake to be using the term "representation" when referring to something that is merely a correlate. This is a mistake that today's neuroscientists keep making. 

Let us consider some EEG readings or brain scans that may be taken of some organism when the organism sees something and then responds to that thing. Visual perception causes greater activity in the visual cortex of the brain. Such an increase is merely a correlate of the seeing of something, not a representation of what was seen. When presented with some stimulus such as a piece of cheese put in front of it, a mouse may move its muscles to move toward the food. Muscle movements produce spikes in EEG readings. But if you pick up a spike in an EEG reading when a mouse moves toward some cheese, that is not correctly called a representation of the cheese. It is instead merely a neural correlate of responding to the cheese. 

The paper "Brain-wide representations of prior information in mouse decision-making" is an example of a recent science paper misspeaking  when using the term "representation." No representations were actually found. All that was observed were neural correlates. The authors confess, "It remains unclear where and how prior knowledge is represented in the brain."  The truth is that no evidence has ever been found of learned knowledge stored anywhere in the brain of any organism. 

The paper "A brain-wide map of neural activity during complex behaviour" is is another example of a recent science paper misspeaking  when using the term "representation." The paper studied mice with implanted electrodes. The authors have section titles such as "Representation of visual stimulus," "Representation of choice," "Representation of feedback," and "Representation of wheel movement." All of these uses of "representation" are illegitimate. 

Let's look at each of these sections:

"Representation of visual stimulus": It soon becomes clear that all that the authors have picked up is fleeting correlates of something a mouse has seen. We read, "a decoding analysis based on the first 100 ms after stimulus onset revealed correlates of the visual stimulus side in many cortical and subcortical regions." The 100 ms refers to a mere tenth of a second. 
"Representation of choice": All that the authors have picked up is fleeting neural correlates of muscle movements associated with the choice. We have another observation window of 100 milliseconds, merely a tenth of a second.  The analysis going on is correctly described as "neural correlates of muscle movement associated with a choice," but not correctly described as "representation of choice." 
"Representation of feedback": All that the authors have picked up is fleeting neural correlates of mice getting some reward, within an observation window of a fifth of a second. This is no actual "representation of feedback." 
"Representation of wheel movement": All that the authors have picked up is fleeting neural correlates of muscle movements. Such neural correlates are not correctly described as representations. 

How in theory might you get actual evidence of a brain making a choice? It could not be through any research involving mice. It could only be done with humans. It might work rather like this:

Humans would have their brains scanned, or be connected to EEG devices allowing their brain waves to be read. The human subjects would be offered a choice between two different foods. The subjects would be instructed to close their eyes and remain motionless as a countdown timer ticked down: 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0. The subjects would be instructed that when the timer reached 0, they should silently make a choice in their minds as to which of the foods they would select, and then remain motionless for another 30 seconds. The interval of the countdown could be varied, with it sometimes being 20 seconds and sometimes 10 seconds. 

The data would then be analyzed by blinded analysts, who did not know what moment corresponded to the timer reaching 0, and did not know which choice was made. The analysts would attempt to figure out which of the two choices was made, purely from the brain data. They would also attempt to figure out at which second the choice was made, purely from the brain data. 

The experiment would fail. Excluding the neural correlates of muscle movements, brains do not show any sign of being involved in decisions. When a person makes a decision without moving his muscles or changing his facial expression, there is no neural correlate of this action. Brains have no actual representations of decisions, and no actual representations of beliefs, or anything someone has learned. 

Friday, September 19, 2025

Parapsychologists Should Not Imitate Bad Research Habits of Neuroscientists

 In this post I will criticize an experiment attempting to show the existence of a mind-over-matter effect. My criticism does not at all arise from any disbelief in the existence of psi phenomena such as telepathy and ESP. I have written many posts presenting the evidence for ESP and clairvoyance, which you can read by reading the series of posts here and here, and continuing to press Older Posts at the bottom right. Later in this post I will give a bullet list linking to some of the best evidence for paranormal psi effects. I regard the evidence for ESP (telepathy) and clairvoyance as being very good. 

The experiment I refer to is described in the paper "Enhanced mind-matter interactions following rTMS induced frontal lobe inhibition" which you can read here. The authors start out by giving some speculative reasons for thinking that the brain may suppress or inhibit paranormal powers of humans. My concern that the authors have gone astray begins when I read the passage below:

"Based on our findings in the two individuals with damage to their frontal lobes, we adopted a new approach to help determine whether the left medial middle frontal region of brain acts [as] a filter to inhibit psi. This was the use of repetitive transcranial magnetic stimulation (rTMS) to induce reversible brain lesions in the left medial middle frontal region in healthy participants."

Inducing "reversible brain lesions?" That sounds like dangerous fiddling that might go wrong and cause permanent damage to the brain.  On this blog I have strongly criticized neuroscience experiments that may endanger human subjects, in the series of posts you can read here  and here. I must be consistent here, and criticize just as strongly parapsychologists using similar techniques. There are very many entirely safe ways to test whether humans have paranormal psychical abilities, and I describe one of them later in this post. So I don't understand why anyone would feel justified in zapping someone's brain in an attempt to provide evidence for such abilities.  There's no need for such high-tech gimmickry. Abilities such as mind-over-matter and telepathy can be tested in simple ways that do not have any reliance on technology.  I would strongly criticize any conventional neuroscience experiment that claimed to be inducing "reversible brain legions."  I must just as strongly attack this experiment for doing such fiddling. 

The paper describing the experiment tells us about an experimental setup in which people were asked to change the output of a random number generator generating the numbers 0 and 1, by looking at a computer screen showing an arrow, and willing the arrow to move in a particular direction. The idea was that if there were a lot more 0s than 1s, an arrow in the middle of the screen would move toward one end of the screen; and if there were a lot more 1s than 0s, the arrow would move toward the other end of the screen. 

We have some paragraphs describing the way the data was analyzed, and the results. Here the paper fails to follow a golden rule for parapsychology experiments. That rule is: keep things as simple as possible, so that readers will readily understand any impressive results achieved, and so that readers will have a minimum tendency to suspect that some statistical sleight-of-hand is occurring. The importance of this rule cannot be over-emphasized. Skeptics and materialist scientists start out by being skeptical of claims of paranormal abilities. The more complex your experimental setup and data analysis, the easier it will be for them to ignore or belittle your results. 

There is a golden rule of  computer programming, the rule of KISS, which stands for Keep It Simple, Stupid.  People doing parapsychology experiments should follow the same rule. The more complex the experimental setup, the easier it will be for skeptics to suggest that some kind of "smoke and mirrors" was going on. 

The authors of the paper "Enhanced mind-matter interactions following rTMS induced frontal lobe inhibition" have violated this rule. They give us some paragraphs of statistical gobbledygook about their results, and fail to communicate an impressive result in any way that even 10% of their readers will be able to understand. I myself am unable to understand any result in the paper providing impressive evidence of a mind over matter effect. If there is any such result, the authors have failed to effectively describe it in a way the general public can understand. 

The authors state this:

"As predicted by our a priori hypothesis, we demonstrated that healthy participants with reversible rTMS induced lesions targeting the left medial middle frontal brain region showed larger right intention effects on a mind-matter interaction task compared to healthy participants without rTMS induced lesions. This significant effect was found only after we applied a post hoc weighting procedure aligned with our overarching hypothesis."

It sounds like the raw results of their experiment failed to show a significant effect, but that by "calling an audible" after gathering the data, introducing some unplanned statistical trick to "save the day," the authors were able to claim a significant result. This sounds like the kind of sleazy maneuver that neuroscientists so often use to try to gin up a result showing "statistical significance." Instead of acting like badly behaving neuroscientists, our authors should have used pre-registration (also called the use of a registered report). The authors should have published a very exact plan on how data would be gathered and analyzed, before gathering any data. They then should have stuck to such a plan. If this resulted in a non-significant effect or null result, they should have called that result a non-significant effect or null result. 

Parapsychologists should not be aping the bad habits of neuroscientists, whether it be zapping brains in a potentially dangerous way, or following "keep torturing the data until it confesses" tactics.  Parapsychologists should be following experimental best practices. 

The experimental evidence for telepathy (also called extra-sensory perception or ESP) is very good. We have almost two hundred years of compelling evidence for the phenomenon of clairvoyance, a type of extrasensory perception occurring when a person is asked to describe something he cannot see and does not know about. It is not correct that serious study of this topic began about 1882 with the founding of the Society for Psychical Research in England, as often claimed. Serious rigorous investigation of the topic of clairvoyance dates back as far as 1825, with the 1825-1831 report of the Royal Academy of Medicine finding in favor of clairvoyance. Serious scholarly investigation of clairvoyance occurred many times between 1825 and 1882.  Such investigations often involved subjects who were hypnotized, with many investigators reporting clairvoyance from hypnotized subjects or subjects who were in a trance.  Experimental investigation of telepathy occurred abundantly in the twentieth century, with many university investigators such as Joseph Rhine of Duke University reporting spectacular successes. 

You can read up about some of the evidence for such things by reading some of my posts and free online books below:

There is a simple way for you to test this subject yourself, by doing quick tests with your friends and family members. I will now describe a quick and simple way of doing such tests that I have  found to be highly successful, as I report in another post here. I have no idea whether you will get similar success yourself, but I would not be surprised if you do. Below are some suggestions:

(1) Test ideally using family members or close friends.  I don't actually have any data showing that tests of this type are more likely to be successful using family members or close friends, but I can simply report that I have had much success testing with family members.

(2) Ask the family member or friend to guess some unusual thing that you have seen with your eyes or seen in a dream.  Announce this simply as a guessing game, rather than some ESP or telepathy test. For example, you might say something like, "I saw something unusual today -- I'll give you four guesses to guess what it was."  Or you might say, "I dreamed of something I rarely dream of -- I'll give you four guesses to guess what it was." 

(3) Do not give any clues about your guess target, or give only a very weak clue. Your ESP test will be trying to find some case of a guess matching a guess target, with such a thing being extremely improbable.  You will undermine such an attempt if you give any good clue, such as "I'm thinking of an animal" or "I'm thinking of something in our house." If you give a clue, give only a very weak one such as "I saw something unusual on my walk today, can you guess what it was," or "I had a dream about something I rarely dream of, can you guess what it was." 

(4) Be sure to suggest that the person try three or four guesses rather than a single guess. I have noticed a strong "warm up" effect when occasionally trying tests like this. I have noticed that the first guess someone makes usually fails, but that the second or third guess is often correct or close to the answer. For example, not long ago I said to one of my daughters, "You'll never guess what I saw down the street." I gave no clues, but asked her to guess. After a wrong guess of an orange cat, her second guess was "a raccoon," which is just what I saw. No one in our family had seen such a thing on our street before. Later in the day I asked her what I saw in a weird dream I recently had, mentioning only that it involved something odd in our front yard. After a wrong first guess of a snowman, she asked, "Was it a wild animal?" I said yes. Then she asked, "Was it an elephant?" I said yes.

(5) After the person makes the first guess, suggest that the person take 10 seconds before making each of the next guesses. Throughout the entire guessing session, you should be trying hard to visualize the thing you are asking the person to guess. Slowing the process down by suggesting 10 seconds between guesses may increase the chances of your thought traveling to the subject you are testing. 

(6) Only test using a guess target that is some simple thing that you can clearly visualize in your mind. Do not test using a guess target of some complicated scene involving multiple actions or interacting objects. For example, don't ask someone to guess some scene you saw that involved someone dropping his coffee and spilling it on his feet. Use a guess target of some single object or a single animal or a single human. Testing with types of animals seems to work well. If the test object or animal has a strong color or some characteristic action, all the better. Do not test using some extremely common sight such as your family dog. Success with such a test will not be very impressive. It's better to use a rarer sight, maybe something you see as rarely as a donkey or a raccoon. 

(7) Answer only yes or no questions, counting each question as one of the three or four allowed guesses.  You can include a single "You're getting warm" answer instead of a "no" answer, but no more than one.  

(8) Very soon after the test, write down the results, recording all guesses and questions, and any responses you made such as "yes" or "no." With testing like this, the last thing you want to rely on is a memory of some event happening weeks ago. Write down the results of your test, positive or negative, within a few minutes or at most an hour of making the test. 

(9) Do a single test (allowing three or four guesses) only about once every week or two weeks. There may be a significant fatigue factor in such tests. A person who does well on such a test may not continue to do well if you keep testing him on the same day. To avoid such fatigue and to avoid annoying people with too many tests, it is good to just suggest a casual test as described above, once every week or two weeks. Keep a long-term record of all tests of this type you have done, recording failures as well as successes. 

(10) It's best not to announce the test as an ESP test or as a telepathy test, but to describe it as a quick guessing game or a test of chance. Our materialist professors have senselessly succeeded in creating very much unreasonable prejudice and bias against psychic realities that are well-established. So the mere act of announcing an ESP test may cause your subject to raise mental barriers that may prevent any successful result. To avoid that, it is best to describe your test as a quick guessing game or a test of chance.  

(11) It's best to choose a guess target that you personally saw either in reality or in a dream. The more personal connection you have with the guess target, the better. Something that you personally saw recently (either in reality or a dream) may work better than something you merely chose randomly. The more recent your sensory experience of the guess target, the better. Choosing a guess target of something you both saw and touched may work better than something you merely saw. The more you have thought about the guess target, the better. It's better that the object have one or two colors than many colors, and the brighter the color is, the better. 

(12) Be cautious in publicly reporting successful results. I would wait until you get three or four good successful tests before reporting anything about such tests on anything like social media. Also, avoid reporting your results as evidence of anything, unless you have something very impressive to report. Social media has a horde of skeptics ready to attack you if you claim evidence for ESP based on slim results. A good rejoinder to such attacks is if you can say, "I'm not claiming anything, I'm just reporting what happened."

ESP test

Above we see some guess targets that were successfully guessed after only a few guesses, in trials in which the guesser was not told that the item was an animal or anything living. There were about nine trials in all, with one or two of the trials being unsuccessful, and one being a partial success. The guess targets were only in my mind, and I compiled the visual above only after these items were guessed correctly. 

Tests that you do of this type will be unlikely to ever constitute any substantial contribution to the literature of parapsychology, unless you follow a very formal approach with an eye towards making such a contribution. But such tests may have the effect of helping you to realize or suspect extremely important truths about yourself and other human beings that you might never have realized. A person might read a dozen times about experiments suggesting something, but the truth of that thing may never sink in until that person has some first-hand experience with the thing.  

Whether ESP or telepathy can occur is something of very high philosophical importance. There is a reason why materialists show a very dogmatic refusal to seriously study the evidence for telepathy. It is that if telepathy can occur, the core assumptions of materialism must be false. Telepathy could never occur between brains, but might be possible between souls. So any personal evidence you may get of the reality of telepathy can be a very important clue aiding you in your philosophical journey towards better understanding what humans are, and what kind of universe we live in. 

Using a binomial probability calculator it is possible to very roughly estimate the probability of getting success in a series of about nine tests like the one above. To use such a calculator, you have to have an estimate of the chance of success per trial. With tests like those I have suggested, it is hard to exactly estimate such a thing, because you are choosing a guess target that could be any of 100,000 different things.  One reasonable approach would be to assume 100,000 different guess possibilities. The chance of a successful guess in only four guesses can be calculated like this, giving a result of only .00004.

probability calculation

The screen above is using the StatTrek binomial probability calculator, which doesn't seem to work whenever the probability is much less than one in a million. A similar calculator is the Wolfram Alpha binomial probability calculator, which will work with very low probabilities. I used that calculator with the data described in my post here. The situation described in that post was:

  • Each correct guess had a probability no greater than about 1 in 10,000, as I never mentioned the category of what was to be guessed, but always merely asked a relative to guess after saying  something like "I saw something today, try and guess what it was" or "I dreamed of something today, try to guess what it was."
  • Counting all questions asked (which were all "yes or no" questions) as guesses, there were, across about nine guessing trials involving nine targets, a total of about 37 guesses. 
  • Six times the guess target was correctly guessed within a few guesses, and one time the answer was wrong but close (with a final guess of a red bicycle rather than a red double-decker bus, both being red vehicles).  
Counting the close guess as a failed attempt, I entered this data into the Wolfram Alpha binomial probability calculator, getting these results (with this calculator the "number of successes" is referred to as the "endpoint"):

ESP test result

With a probability of less than about .00000000000000001 (roughly one chance in 100 quadrillion), it would be very unlikely for anyone to ever get a result as successful by mere chance, even if every person on planet Earth were to try such a set of trials. You can use the same Wolfram Alpha binomial probability calculator to get a rough estimate of the likelihood of results you get. 
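For readers who would rather compute such an estimate themselves than rely on an online calculator, here is a minimal sketch in Python (standard library only), plugging in the same rough assumptions described above: a 1-in-100,000 target given four guesses, and six hits in about 37 guesses at roughly 1 in 10,000 per guess.

```python
from math import comb

def prob_at_least(k_min, n, p):
    """Binomial upper tail: probability of at least k_min successes
    in n independent trials, each with success probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Chance of hitting a 1-in-100,000 guess target within four guesses
print(prob_at_least(1, 4, 1 / 100_000))   # about 0.00004

# Chance of at least six correct guesses in about 37 guesses,
# with roughly a 1-in-10,000 chance per guess
print(prob_at_least(6, 37, 1 / 10_000))   # roughly 2e-18
```

The second figure comes out around two chances in a billion billion, comfortably below the .00000000000000001 level mentioned above.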

I mention using a binomial probability calculator above, but just ignore such a thing if you find it confusing, because the use of such a calculator is just some optional "icing on the cake" that can be used after a successful series of tests. The point of the tests I suggest here is not to end up with some particular probability number, but mainly to end up with an impression in your mind of whether you were able to get substantive evidence that telepathy or mind reading is occurring. Such an impression may be a valuable clue that tends to point you in the right direction in developing a sound worldview. Some compelling personal experience with telepathy may save you from a lifetime of holding the widely taught but unfounded and untenable dogma that you are merely a brain or merely the result of some chemical activity in a brain.  Getting such experience, you may embark on further studies leading you in the right direction. Keep in mind that a negative test never disproves telepathy, just as failing to jump a one-meter hurdle does nothing to show that people can never jump one-meter hurdles. 

In the academic literature of ESP testing, we often read about the use of Zener cards, cards in which there are five abstract symbols. While using such cards has the advantage of allowing precise estimates of probability,  there is no particular reason to think that better results will be obtained when using such cards. To the contrary, it may be that impressive results are much less likely to be obtained using such cards, and that ESP tests work better when living or tangible guess targets are used such as a living animal or a tangible object. 

A very important point I must reiterate is that when trying tests such as I have suggested, it is crucial to allow for a second, third and fourth guess, with at least ten seconds between guesses (during which the person thinking of the guess target tries to visualize the guess target).  In my testing the correct guesses tend to come on the second, third or fourth try. 

The results mentioned above are not by any means the best result I have got in a personal ESP test. The beginning of my very interesting post "Spookiest Observations: A Deluxe Narrative" describes a much more impressive result I got long ago, in a different type of test than the type of test described above.  

Wednesday, September 17, 2025

More Candid Confessions of the Neuroscientists

 In my post "Candid Confessions of the Cognition Experts" which you can read here, and another similar post, I quote some cognition experts and neuroscientists who make confessions about matters such as the sorry state of neuroscience research, and how little neuroscientists understand how minds arise. Below are some more quotes of this type. 

I'll start with some quotes mostly using the phrase "in its infancy." Whenever scientists confess that something is "in its infancy," they are effectively admitting they do not have good knowledge about such a topic.

  • "Despite recent advancements in identifying engram cells, our understanding of their regulatory and functional mechanisms remains in its infancy." -- Scientists claiming erroneously in 2024 that there have been recent advancements in identifying engram cells, but confessing there is no understanding of how they work (link).
  • "Study of the genetics of human memory is in its infancy though many genes have been investigated for their association to memory in humans and non-human animals."  -- Scientists in 2022 (link).
  • "The neurobiology of memory is still in its infancy." -- Scientist in 2020 (link). 
  • "The investigation of the neuroanatomical bases of semantic memory is in its infancy." -- 3 scientists, 2007 (link). 
  • "Currently, our knowledge pertaining to the neural construct of intelligence and memory is in its infancy." -- Scientists, 2011 (link). 
  • "But when it comes to our actual feelings, our thought, our emotions, our consciousness, we really don't have a good answer as to how the brain helps us to have those different experiences." -- Andrew Newberg, neuroscientist, Ancient AliensEpisode 16 of Season 14, 6:52 mark. 
  • "Dr Gregory Jefferis, of the Medical Research Council's Laboratory of Molecular Biology (LMB) in Cambridge told BBC News that currently we have no idea how the network of brain cells in each of our heads enables us to interact with each other and the world around us."  -- BBC news article (link). 

By making such confessions, scientists are admitting that they do not actually understand how a brain could store or retrieve memories. The reason for such ignorance (despite billions of dollars of funding to try to answer such questions) is almost certainly that the brain does not actually store memories and is not the source of the human mind.  

A similar confession is found in the recent paper here, where scientists confess "It remains unclear where and how prior knowledge is represented in the brain." The truth is that no one has ever found the slightest evidence of any such thing as prior knowledge being represented in the brain, and no one understands how learned knowledge could ever be represented in a brain. 

An interesting paper is the paper "On the omission of researchers' working conditions in the critique of science: Critique of neuroscience and the position of neuroscientists in economized academia" by Eileen Wengemuth.  Wengemuth interviewed 13 neuroscientists about critiques of neuroscientists, apparently agreeing to quote them anonymously. She got some revealing quotes. 

A neuroscientist identified only as NW12 states this: "We still don't understand how molecules contribute to consciousness or the mind.”  On page 85 a neuroscientist identified only as NW2 makes a confession, which has a kind of "we must publish a paper even when we know it's junk" sound to it. First Wengemuth tells us this:

"One interviewee recounts an incident in which a new colleague pointed out a flaw in an experimental setup, which limited the validity of the conclusions drawn from the experiment. However, since she needed to have a publication soon, the interviewee [NW2]  describes that it seemed not possible to change the experimental setup and to repeat the experiment."

Immediately after that description, we have a quote from NW2:

NW2:  "She had a very good point and we never thought about it in two years of doing this experiment. We have a problem. (Both laugh) And nevertheless, we have to publish, because... you know, it's two years of work! So we will discuss this, we will account for it, we will try our best, but we probably don't want to rerun the whole experiment saying 'Oh, what happens if we change this other thing.' Once we've reached our conclusions..." 

I: "You said: 'But we still have to publish.' Did you mean, for example, that you got some grants and now you have to show, ok, we did something with that money?"

NW2: "Not so much based on grant money, but in terms of career. (...) I need papers to get my next job."

Get the idea? "The show must go on" as they say in the theater business. And apparently scientific papers must be published, to advance the career goals of neuroscientists,  even after it has become clear that bad methods were used (which seems like the majority of the time in contemporary neuroscience research). Discussing the quote above, Wengemuth says, "In this interview clip, it becomes clear that the interviewee perceives her working and research conditions as not allowing her to work in a way that would meet her own standards of good science."

On page 85 a neuroscientist identified only as NW9 seems to suggest that guys like him are playing fast-and-loose in their interpretations of what their experiments show, in order to get interesting-sounding claims that may increase the chance of publication in "high-impact" journals:

NW9: "We are working in a structure in which an increasing number of people are on third-party-funded positions, which are temporary. And one important criterion that decides who stays and who doesn't, is: who has published where? So journal impact factors. And publishing high impact often means: generalizing as much as possible in the interpretations and throwing as many limitations as possible overboard. "

Wengemuth describes what is occurring in that quote: "He argues here that the broad claims for which neuroscientists have been criticized also have to be understood as a way of getting one's article published in a high impact journal and thus increasing one's chances for a next job." Get the idea? Our neuroscientists are prioritizing career advancement over accuracy of statements.  It sounds like they are playing "fake it until you make it."

bribed neuroscientist
Yeah, right

A survey of Danish researchers found large fractions of them confessing to committing various types of shady or sleazy Questionable Research Practices. A 2024 follow-up study found a similar level of confession in other countries. The paper is entitled "Is something rotten in the state of Denmark? Cross-national evidence for widespread involvement but not systematic use of questionable research practices across all fields of research." The title is inaccurate, because the confessions do reveal a systematic use of Questionable Research Practices. Figure 2 of the paper reveals these confessions:

  • About 60% of the polled researchers confessed to citing literature without reading it. 
  • About 50% of the polled researchers confessed to reporting non-significant findings as evidence of no effect. 
  • About 50% of the polled researchers confessed to granting "honorary authorship" to authors who did not participate in the study. 
  • More than 50%  of the polled international researchers confessed to overselling results. 
  • About 50% of the polled researchers confessed to HARKing, which is when some hypothesis is dreamed up to explain results in an experiment not designed to test such a hypothesis. 
  • About 50% of the polled international researchers confessed to cherry-picking what data supports a hypothesis and what does not. 
  • About 40% of the polled international researchers confessed to data dredging or p-hacking, a practice sometimes described as "keep torturing the data until it confesses."
  • About 40% of the polled international researchers confessed to having refrained from reporting data that could weaken or contradict their findings. 
  • About 30% of the polled international researchers confessed to gathering more data after the initially gathered data failed to show a significant effect. 

questionable research practices

smoke and mirrors neuroscience