Tuesday, May 26, 2020

Groupthink and Peer Pressure Make It Taboo for Neuroscientists to Put Two and Two Together

Why do so many neuroscientists go far astray in their dogmatic declarations about the brain? To understand the speech tendencies of neuroscientists, we must understand the environments that create and employ such scientists. Neuroscientists are created in university departments that are ideological enclaves. An ideological enclave is some environment where almost everyone believes in some ideology that the majority of human beings do not profess. Different departments of a university may tend to be places where different ideologies are concentrated.

A seminary is an example of an ideological enclave. A seminary is an institution where people are trained to be ministers or priests of some particular religion. A university graduate school program (one issuing master's degrees and PhDs in some academic specialty) may also be an example of an ideological enclave. Just as a seminary trains people to think in one particular way, and to hold a particular set of unproven beliefs, many a university graduate program may train people to think in a particular way, and to hold a particular set of unproven beliefs. Neuroscience graduate school programs tend to train people to believe that all mental phenomena have a cause that is purely neural, and that your mind is merely the activity of your neurons. This strange belief is not a belief professed by the majority of human beings.

It would be incredibly hard for any ideological enclave to enforce its belief ideology if the enclave got its members by some random selection process that gave it new members reflecting the thinking of the general population. Instead, things are much easier for the ideological enclave. There is what we can call a magnet effect by which the ideological enclave only gets new trainees when people choose to join the enclave. This guarantees that each new set of trainees will tend to be people favoring the ideology of the enclave. The great majority of the people signing up to be trained in the ideological enclave will be those attracted to its ideology. The great majority of the people signing up to be trained in a theological seminary will be those who favor the theology being taught in that seminary. Similarly, the great majority of the people signing up for a university graduate program in neuroscience or evolutionary biology will be people favoring the belief dogmas popular in such programs.

Once a person starts being trained in an ideological enclave, he will find relentless social pressure to conform to the ideology of that enclave. This pressure will continue for years. The pressure will be applied by authorities who usually passed through years of training and belief conditioning by the ideological enclave, or a similar ideological enclave elsewhere. In a seminary such authorities are ministers or priests, and in a university graduate program such authorities are professors or instructors. Finally, after years of belief conditioning, the person who signed up for the training will be anointed as a new authority himself. In the university graduate program, this occurs when something like a master's degree or a PhD or a professorship is granted. In a seminary, this may occur when someone becomes a minister or priest.

Groupthink is a tendency for some conformist social unit to have overconfidence in its decisions or belief customs, or unshakable faith in such things. Groupthink is worsened by any situation in which only those with some type of credential (available only from some ideological enclave) are regarded as fit to offer a credible judgment on some topic. In groupthink situations, an illusion of consensus may be helped by self-censorship (in which those having opinions differing from the group ideology keep their contrary opinions to themselves, for fear of being ostracized within the group). In groupthink situations, belief conformity may also be helped by so-called mindguards, who work to prevent those in the group from becoming aware of contrarian opinions, alternate options or opposing observations. In an academic community such mindguards exist in the form of peer-reviewers and academic editors who prevent the publication of opinions and data contrary to the prevailing group ideology. We saw an example of such conformity enforcement in neuroscience not long ago when an “outrage mob” of 900 petitioners forced the retraction of a neuroscience paper which seemed to have no sin worse than contrarian thinking.


For the person who completes a university graduate program and gets his master's degree or PhD, is that the end of the conformist social influence, the end of the pressure to believe and think in a particular way? Not at all. Instead, the “follow the herd” effect and the pressure to toe the “party line” of the belief community typically continues for additional decades. The newly minted PhD rarely goes off on his own to become an independent thinker marching to his own drummer, outside of the heavy influence of the belief community. Instead, such a person usually becomes a kind of captive of a belief community. The newly minted PhD will very often get a job working for the very ideological enclave that trained him, a particular academic department of a university. Or, he may end up employed by some very similar academic department of some other university, a place that is an ideological enclave just like the one in which he was trained. Such employment typically lasts for decades, during which someone may be stuck in a kind of echo chamber in which everyone parrots the same talking points. So when there is groupthink and ideological conformity in some academic specialty, peer pressure can continue to act for decades on someone like a neuroscientist or a string theorist or an evolutionary biologist.

Such peer pressure can be something that tells people they are supposed to think in one way, and may also be something that tells people they should not think in some other way. The enforcement of belief taboos and speech taboos is one of the main tendencies of ideological enclaves and belief communities. Such taboos are promoted by those interested in preserving the ideological cohesiveness of the belief community. The belief community of neuroscientists enforces thinking taboos that can prevent neuroscientists from reaching conclusions that follow rather obviously from particular observations. Such taboos can make it culturally forbidden for neuroscientists to put two and two together. “Put two and two together” is a phrase referring to reaching an obvious conclusion. Let me give some examples where belief taboos prevent neuroscientists from putting two and two together.

Example #1: Near-death Experiences and Apparitions

Human beings often have near-death experiences. In such experiences people very often report floating out of their bodies and observing their bodies from a distance. It is quite common for extremely vivid near-death experiences to occur during cardiac arrest, when brain activity has shut down because the heart has stopped. The accounts given by those who have near-death experiences tend to have very similar features, the types of items listed on the Greyson Scale. These include things such as passing through a tunnel, encountering deceased relatives, feelings of peace and joy, being told to go back when reaching a border or boundary between life and death, and so forth. Near-death experiences do not have the kind of random content we would expect from hallucinations. Near-death experiences also very often occur when any brain hallucination should be impossible, because the heart has stopped and electrical activity in the brain has stopped. When people report having near-death experiences when their hearts are stopped, they can often recall details of the activity of medical personnel working near them, details they should not have been able to observe given their deeply unconscious medical condition.

In addition, perfectly healthy humans are often surprised to see an apparition of someone they did not know was dead, only to soon find out later that the corresponding person did die, typically on the same day and hour as the apparition was seen. You can read about 165 such cases here, here, here, here, here, here and here. Moreover, a single apparition is often seen by multiple witnesses, as discussed in 50+ cases here and here and here and here.

There is a very clear conclusion that must be reached when someone puts two and two together regarding what we know about near-death experiences and apparitions. The conclusion is that human consciousness is not actually a product of the brain, and can continue even when the brain has stopped working because of cardiac arrest. But to conclude such a thing would be to violate a belief taboo enforced by groupthink and peer pressure in the neuroscientist belief community. The belief taboo is that you cannot believe in any type of human soul, but must believe that all human mental activity comes purely from neurons. So in this case the social taboo (enforced by groupthink and peer pressure) prevents neuroscientists from putting two and two together.

Example #2: The Lack of Anything in Brains Suitable for Long-Term Memory Storage or Instant Memory Retrieval

Humans are capable of accurately remembering episodic memories and learned information for more than 60 years. Humans also routinely show the ability to instantly recall information learned many years ago, given a single prompt such as a question or the mention of a name or place. But we know of nothing in the brain that can explain such abilities.

A computer hard disk may read and write information by using a spinning disk and a read-write head, but we know of no similar thing in the brain. We know of nothing in the brain that seems like a unit specialized for reading stored information, nor do we know of anything in the brain that seems like some unit specialized for writing information. No one has ever discovered any type of encoding system by which any of the vast varieties of information humans remember could ever be translated into neural states or synapse states. Nor has anyone ever discovered anything like some indexing system that might explain how humans could instantly recall things.

Although it is often claimed that memories are stored in synapses, the proteins that make up synapses are very short-lived, having lifetimes of only a few weeks or less. There is nothing in the brain that is a plausible candidate for a place where memories might be stored for several years, let alone six decades. Humans are able to remember very large bodies of information with 100% accuracy, as we see on stage when an actor recalls all of the lines of the role of Hamlet without error, or all of the lines and notes of the roles of Wagner's Siegfried or Tristan without error. But such 100% recall of large bodies of learned information should be impossible if it occurred through neural activity, given the high levels of signal noise in a brain. It has been estimated that when a neural signal travels from one neuron to another in a cortex, the signal transmission occurs with far less than 50% reliability. Other than the genetic information in DNA, no one has ever found any sign of stored information in a brain, such as memory information that could be read from an organism's brain after it died.

There is a very clear conclusion that must be reached when someone puts two and two together regarding what we know about the limits of the human brain. The conclusion is that the brain cannot be the storage place of human memories. But to conclude such a thing would be to violate a belief taboo enforced by groupthink and peer pressure in the neuroscientist belief community. The belief taboo is that you cannot believe that any major facet of the human mind comes from something other than the brain, but must believe that all human mental activity comes purely from neurons. So in this case the social taboo (enforced by conformist groupthink and peer pressure) prevents neuroscientists from putting two and two together.

Example #3: The Results of Hemispherectomy Operations or Even Greater Brain Tissue Loss

A hemispherectomy operation is an operation in which half of a patient's brain is removed, typically to stop very bad seizures the person is suffering from. Hemispherectomy operations provide an excellent test for dogmas regarding the brain. From the dogma that the brain is the cause of human intelligence and the storage place of memories, we should expect that suddenly removing half of someone's brain should cause at least a 50% drop in intelligence, along with a massive loss of memories and learned information.

Nothing of the sort happened when such operations were done. You can read about the exact effects of hemispherectomy operations by reading my posts here and here and here and here. In most cases a hemispherectomy operation does not cause a significant reduction in intelligence as measured by IQ tests. In quite a few cases, someone did better on an IQ test after half of his brain was removed in a hemispherectomy operation. Hemispherectomy operations also do not seem to cause major loss of memories.

Brain-ravaging natural diseases sometimes provide an even better test of dogmas about the brain. Such diseases often remove much more than half of a person's brain. Astonishingly, the result is often a person of normal intelligence and sometimes even above-average intelligence. The physician John Lorber studied many cases of people who had lost the great majority of their brains, mostly because of a disease called hydrocephalus. Lorber was astonished that more than half of such patients had above-average intelligence. Then there are cases such as that of the French person who managed to hold a civil servant job for a long time, even though he had almost no brain.

There is a very clear conclusion that must be reached when someone puts two and two together regarding what we know about how loss of half or most of the brain has little effect on intelligence or memory. The conclusion is that the brain cannot be the storage place of human memories, and cannot be the source of human intelligence. But to conclude such a thing would be to violate a belief taboo enforced by groupthink and peer pressure in the neuroscientist belief community. The belief taboo is that you cannot believe that any major facet of the human mind comes from something other than the brain, but must believe that all human mental activity comes purely from neurons. So in this case the social taboo (enforced by an echo chamber of groupthink and peer pressure) prevents neuroscientists from putting two and two together.



In this regard we may compare neuroscience departments of universities to some bizarre pharmaceutical manufacturer that allows its researchers to note when the company's pill causes a person to collapse, turn white, and stop breathing, but makes it a taboo for researchers to put two and two together and conclude that the company's pill is dangerous. 

Saturday, May 2, 2020

Your Physical Structure Did Not Arise Bottom-Up, So Why Think Your Mind Did?

Neuroscientists typically maintain that human mental phenomena are entirely produced by the brain. But this claim is mainly a speech custom of a social group, a belief dogma of a belief community, rather than something that is justified by facts. Looking at the human mind, we find again and again characteristics and abilities that cannot be credibly explained through any known features of the brain. Consider the following:
  1. Humans are able to recall extremely esoteric or distant items of information instantly. For example, I scored more than 50% on a pair of Youtube.com challenge videos playing 40 musical themes from 1960s and 1970s TV shows, which did not offer any set of choices to choose from. And a 60-year-old, upon hearing of some obscure historical or literary figure he hasn't heard mentioned in 40 years, may be able to identify him. But we know of nothing in a brain that could allow such instantaneous recall. Computer information systems that retrieve information instantly can do this because of features such as b-trees, hashing and indexes that are unlike anything in the human brain. 
  2. For many types of performers such as Shakespearean actors and Wagnerian tenors, recall of voluminous learned information occurs with an accuracy of at least 99%. But in neurons and the supposed storage place of memories (synapses), there are multiple types of signal noise that are believed to prevent chemical/electrical signals from being transmitted with more than 50% accuracy. Since a chemical/electrical signal would have to pass through many different neurons and synapses, we would expect a neural recall of memory to have much less than 10% accuracy (a rough arithmetic sketch of this compounding appears just after this list). 
  3. Humans can remember things very well for more than 50 years, but synapses (the supposed storage place of memories) are made up of proteins that have an average lifetime of only a few weeks. Based on this fact, we should not expect synapses to be able to store memories for more than a few weeks. 
  4. Humans are capable of thought, reflection, insight, imagination, and creativity, but we know of no specific features in the brain that might allow any of these things. We know of no real reason why a single neuron should be thoughtful, reflective, insightful, imaginative or creative, and we know of no real reason to suppose that billions of connected neurons should be thoughtful, reflective, insightful, imaginative or creative.
  5. Computers are able to store information rapidly and recall information rapidly partially because they have a specific component called a read-write head that handles such functions. But we know of no specific component in a brain that might act like a write mechanism, nor do we know of any specific component in a brain that might act like a read mechanism. 
  6. For a human brain to be able to store memories, it would need to have some incredibly sophisticated and elaborate encoding system whereby information that humans can recall (images, words, abstract concepts, feelings and episodic memories) could be translated into stored neural states. Nothing like any such encoding system has ever been discovered. If it were ever discovered it would be a miracle of design that would worsen a thousand-fold the problem of naturally explaining the origin of humans. 
  7. As discussed here, there is very good experimental evidence for paranormal abilities such as ESP, evidence that cannot be explained by brain activity. 

Clearly, the human brain is an extremely poor candidate for something that can explain the human mind. But people continue to cling to the idea that the brain generates the mind (or the equally faulty idea that the brain is the mind). If you ask someone to justify such a belief, the person may say something like this: “You must believe your mind comes from your brain, because there's no other organ in the body that could be making the mind – and of course it would be ludicrous to believe that the mind comes from something other than the body.” But such an idea should not seem ludicrous in the least when we consider that another huge aspect of ourselves – the human form or structure – cannot possibly have arisen bottom-up from anything in our bodies, and must somehow arise from outside of our bodies or from something different from our bodies.

Let us consider how little we know about how humans come into the world. When a sperm unites with a female ovum, the result is a speck-like fertilized egg. But somehow over 9 months, there occurs a progression leading from this tiny speck to a full human baby. This process is sometimes called morphogenesis or embryogenesis. How does this progression happen? We have basically no idea.

For decades many have pushed an untenable misconception about morphogenesis. The idea is that DNA in a cell contains a blueprint or set of instructions for making a human, and that morphogenesis occurs when such instructions are read and carried out inside the human womb. But there are several reasons why this idea cannot possibly be true. They include the following:
  1. Human DNA has been thoroughly studied, and no blueprint of a human form has ever been discovered in it, nor has anyone discovered anything in it like a program, algorithm, or set of instructions for making a human, or even any organ or cell of a human. There is not anything like a general blueprint for an overall human form in DNA, nor is there anything like a blueprint for making any large system of a human, nor is there anything like a blueprint for making any organ of a human, nor is there even anything like a blueprint for making a particular type of human cell. Similarly, there is not anything like a set of instructions or program for making an overall human form in DNA, nor is there anything like a set of instructions or program for making any large system of a human, nor is there anything like a set of instructions or program for making any organ of a human, nor is there even anything like a set of instructions for making a particular type of human cell.
  2. The actual information in DNA is merely very low level chemical information, information on the chemical ingredients that make up proteins and RNA. 
  3. DNA is written in a minimalist bare-bones language in which the only things that can be expressed are things such as lists of amino acids. There is absolutely no high-level expressive capability in DNA that might ever allow it to be something that might be a blueprint for making humans or a set of instructions for making humans. 
  4. The amount of information in human DNA and the number of genes in DNA are vastly smaller than we would expect if DNA was a specification of a human. For example, a simple rice plant has twice as many genes as a human. 
  5. There is nothing in the human womb that could ever be capable of reading and executing the fantastically complicated instructions that would need to exist in DNA if DNA were to be a specification of a human. Blueprints don't build things; building construction occurs only when there's an intelligent blueprint reader and a construction crew. We know of nothing in the human womb that could act like an intelligent blueprint reader or a construction crew. If a human specification were to exist in DNA, it would consist of instructions so complicated that it would require an Einstein to understand them; and there's no Einstein in the womb of a pregnant woman. 

See this post, this post and this post for a very detailed discussion of why DNA cannot be a human specification. Those posts include quotes by quite a few biological experts supporting my statements on this topic.  Below are only a few of more than a dozen similar comments that I have collected at the end of this post.


On page 26 of the recent book The Developing Genome, Professor David S. Moore states, "The common belief that there are things inside of us that constitute a set of instructions for building bodies and minds -- things that are analogous to 'blueprints' or 'recipes' -- is undoubtedly false." Scientists Walker and Davies state this in a scientific paper:

"DNA is not a blueprint for an organism; no information is actively processed by DNA alone. Rather, DNA is a passive repository for transcription of stored data into RNA, some (but by no means all) of which goes on to be translated into proteins."

Geneticist Adam Rutherford states that "DNA is not a blueprint." A press account of the thought of geneticist Sir Alec Jeffreys states, "DNA is not a blueprint, he says."  B.N. Queenan (the Executive Director of Research at the NSF-Simons Center for Mathematical & Statistical Analysis of Biology at Harvard University) tells us this:


"DNA is not a blueprint. A blueprint faithfully maps out each part of an envisioned structure. Unlike a battleship or a building, our bodies and minds are not static structures constructed to specification."

"The genome is not a blueprint," says Kevin Mitchell, a geneticist and neuroscientist at Trinity College Dublin. "It doesn't encode some specific outcome."  His statement was reiterated by another scientist. "DNA cannot be seen as the 'blueprint' for life," says Antony Jose, associate professor of cell biology and molecular genetics at the University of Maryland. He says, "It is at best an overlapping and potentially scrambled list of ingredients that is used differently by different cells at different times."  Sergio Pistoi (a science writer with a PhD in molecular biology) tells us, "DNA is not a blueprint," and tells us, "We do not inherit specific instructions on how to build a cell or an organ."

The visual below shows you the very humble reality about DNA (so much less than the grossly inflated myths so often spread about it): that DNA merely specifies low-level chemical information such as the amino acids that make up a protein.  Particular combinations of the "ladder rungs" of the DNA (the colored lines) represent particular amino acids (the "beads" in the polypeptide chain that is the starting point of a protein). 


[Image: DNA]

Human bodies have multiple levels of organization beyond such simple polypeptide chains, including: 

  • The three-dimensional structure of protein molecules
  • The three-dimensional structure of the 200 types of cells in the human body, most of these cell types being fantastically complicated arrangements of matter (scientists have compared the complexity of a cell to the complexity of an airplane or city)
  • The structure of tissues
  • The structure of organ systems and skeletal systems
  • The overall structure of the human body, what you see by looking at a naked human body

None of the structures listed above are specified by DNA or genomes or genes. How such structures arise is unknown. 

In light of the facts I have discussed, we must draw a very important conclusion: the biological form of an individual (his overall body plan or structure) cannot originate bottom-up from something within the human body. The physical structure of a human must come from some mysterious source other than the human body or outside of the body. Much as we would like to believe the widely circulated myth that the form of your body comes from your DNA, the facts do not at all support such an idea. We know of nothing in the human body that can be the source of the human form or body plan, nothing that can explain the marvel of morphogenesis, the progression from a speck-sized egg to a full-sized human body. So the human form or physical structure or human body plan must somehow come from outside of the body or from some source other than the body. 

The person who has carefully considered such a reality should have no objection to the idea that the human mind must come from some source outside of the body or different from the human body. Both conclusions follow from similar types of evidence considerations. Just as DNA fails in every respect to be a credible source for the human physical form, the brain fails in almost every respect to be a credible source of the human mind (for reasons discussed at great length in the posts of this site).

We must climb out of the tiny thought box of materialism and consider other possibilities. One possibility is that the human mind comes from some spiritual or energy reality that co-exists with the human body. In such a case it might be true that the mind of each person has a different source, but not a bodily source. Another possibility is that every human mind comes from the same source, some mysterious and unfathomable cosmic reality that might also be the source of the human physical form.

To gain some insight on how we have been conditioned or brainwashed to favor a bad type of explanation for our physical structure and minds, let us consider a hypothetical planet rather different from our own: a planet in which the atmosphere is much thicker, and always filled with clouds that block the sun.  Let's give a name to this perpetually cloudy planet in another solar system, and call this imaginary entity planet Evercloudy.  Let's imagine that the clouds are so thick on planet Evercloudy that its inhabitants have never seen their sun.  The scientists on this planet might ponder two basic questions:

(1) What causes daylight on planet Evercloudy?
(2) How is it that planet Evercloudy stays warm enough for life to exist?

Having no knowledge of their sun (the correct top-down explanation for these phenomena), the scientists on planet Evercloudy would probably come up with very wrong answers. They would probably speculate that daylight and planetary warmth are bottom-up effects. They might spin all kinds of speculations such as hypothesizing that daylight comes from photon emissions of rocks and dirt, and that their planet was warm because of heat bubbling up from the hot center of their planet. By issuing such unjustified speculations, such scientists would be like the scientists on our planet who wrongly think that life and mind can be explained as bottom-up effects bubbling up from molecules. 

Facts on planet Evercloudy would present very strong reasons for rejecting such attempts to explain daylight and warm temperatures on planet Evercloudy as bottom-up effects. For one thing, there would be the fact of nightfall, which could not easily be reconciled with any such explanations. Then there would be the fact that the dirt and rocks beneath the feet of the scientists of Evercloudy would be cold, not warm as would be true if such a bottom-up theory of daylight and planetary warmth were correct. But we can easily believe that the scientists on planet Evercloudy would just ignore such facts, just as scientists on our planet ignore a huge number of facts arguing against their claims of a bottom-up explanation for life and mind (facts such as the fact that people still think well when you remove half of their brains in hemispherectomy operations, the fact that the proteins in synapses have very short lifetimes, and the fact that the human body contains no blueprint or recipe for making a human, DNA being no such thing). 

We can imagine someone trying to tell the truth to the scientists on planet Evercloudy:

Contrarian: You have got it very wrong. The daylight on our planet and the warmth on our planet are not at all bottom-up effects bubbling up from under our feet.  Daylight and warmth on our planet can only be top-down effects, coming from some mysterious unseen reality beyond the matter of our planet. 
Evercloudy scientist:  Nonsense! A good scientist never postulates things beyond the clouds. Such metaphysical ideas are the realm of religion, not science. We can never observe what is beyond the clouds. 

Just as the phenomena of daylight and planetary warmth on planet Evercloudy could never credibly be explained as bottom-up effects, but could only be credibly explained as top-down effects coming from some mysterious reality unknown to the scientists of Evercloudy, the phenomena of life and mind on planet Earth can never be credibly explained as bottom-up effects coming from mere molecules, but may be credibly explained as top-down effects coming from some mysterious unknown reality that is the ultimate source of life and mind. 

Tuesday, April 21, 2020

A Diagram of Explanatory Dysfunction in Academia

Below is a diagram that attempts to illustrate some of the explanatory problems afflicting colleges and universities. A major factor in such problems is what we may call achievement legends.


[Diagram: academia problems]

An achievement legend is a story that is repeatedly told in the classrooms of colleges and universities, a story claiming without proof that some wonderful progress was made by scientists. Such legends include the ones below and many others:
  • The story that the origin of species and the origin of humanity were successfully explained by the nineteenth century biologist Charles Darwin.
  • The story that neuroscientists have been able to figure out where the human mind comes from and how human memory works, by studying the human brain.
  • The story that the “book of life” was discovered in the middle of the twentieth century, when scientists found some molecule that contained a blueprint or recipe for making human beings.
  • The story that scientists have been able to figure out important truths about the evolution of the universe, by coming up with ideas such as dark matter and primordial exponential cosmic inflation (not to be confused with ordinary expansion).

All of these stories meet the definition of legend, which is “a traditional story sometimes regarded as historical but unauthenticated.” None of these claims of achievement has been proven. There are very substantial reasons for rejecting each one of them. But these stories of great achievements keep being told again and again by our professors. One of the hardest things to dispel is a dubious achievement legend once it has spread. Such legends provide prestige boosts to various people in academia, and once a person has been hooked on an intoxicating conceit, it becomes incredibly hard for such a person to give it up and adopt a more realistic viewpoint about his relatively modest state of knowledge.

These achievement legends are pillars of a worldview that is predominant at colleges and universities. The worldview is based on the idea that the most important realities such as life and mind are explained by matter, and that such realities arose from blind accidental processes. Believing in the smug achievement legends, those who hold this worldview believe that scientists are making excellent progress in coming up with purely material explanations for life and mental phenomena.

But such a worldview is contradicted by a gigantic number of observational facts. Such facts and observations can be divided into two different categories: the paranormal and the not-at-all-spooky. In the paranormal category are a host of observations that mainstream professors may dismiss as “impossible,” even though they are massively reported. These include things such as apparition sightings, deathbed visions, near-death experiences, anecdotal accounts of clairvoyance, extremely high scores on tests of extrasensory perception (such as we would never expect any human to get by chance), photos of dramatic recurring patterns in mysterious orbs, and so forth. But there is also a huge number of "not at all spooky" observations that stand in opposition to the prevailing academic worldview. Among these are observations such as these:

  1. Low-level observations indicating that DNA does not have and cannot have a blueprint or recipe for making a human.
  2. Innumerable observations of incredibly organized biological complexity vastly more fine-tuned than anything that can reasonably be explained as a result of chance.
  3. Observations proving that brain tissue does not have the long-term information storage capability it would need to have if prevailing ideas about brains and minds are correct.
  4. Observations proving that massive damage to brains often has only slight effects on mind and memory, contrary to what we would expect under the dogma that the mind is merely the product of the brain, and the dogma that memories are stored in brains.
  5. Observations establishing that the universe has many types of physical fine-tuning such as we would not expect under prevailing academic assumptions.
These are part of a massive body of evidence that defies and discredits the worldview prevailing among the professors of academia. But such professors do a very good job of keeping their classrooms and journals and textbooks as places where such evidence is not discussed. Professors exert a kind of de facto censorship in which such opposing evidence is excluded from the sheltered world of academia. The “no entry” symbols in my diagram illustrate this type of de facto censorship.

The evidence I just discussed gives rise to contrarian viewpoints that differ from the prevailing worldview held by academic professors. But there is little or no fair discussion of such contrarian viewpoints in the regimented literature and classroom presentations of academia. If there is any discussion of such viewpoints, an academic authority will typically make sure to label the contrarian thinkers with various defamatory or deprecatory labels designed to prevent anyone from taking their opinions seriously.

Academia produces a huge flow of research and literature. But the research and literature is largely what I call dogma-driven. Dogma-driven research is research designed to reassure professors that they are on the right track, and that their grand explanatory pretensions are justified. Examples include the following:

  • Innumerable papers presenting different variations of Guth's theory of primordial cosmic inflation, most of which are designed to bolster our confidence in such a theory.
  • Innumerable papers speculating about dark matter and dark energy, two things that have never been observed.
  • Countless papers trying to shore up the doctrine of common descent, by presenting 1001 different scenarios describing inheritance paths that life could have taken to progress through Darwinian evolution.
  • Thousands of brain scanning studies, designed to provide evidence for claims that the brain is the source of mental phenomena or the storage place of memories.
  • Thousands of papers presenting variations of the unverified theory of supersymmetry.

Theory-biased analysis occurs both in journals and in textbooks. The observational facts are all passed through the prism of existing theories, which are often based on smug achievement legends. There will be either no discussion of a vast number of observational facts conflicting with such theories, or the discussion will be very skimpy and jaundiced. Reasonable alternative explanations will not be discussed, or will be discussed in some skimpy deprecatory manner. 

The result of such research is all too often unimpressive. A significant fraction of all research findings reported in scientific journals cannot be reproduced. The problem was highlighted in a widely cited 2005 paper by John Ioannidis entitled, “Why Most Published Research Findings Are False.” 

A scientist named C. Glenn Begley and his colleagues tried to reproduce 53 published studies called “ground-breaking.” He asked the scientists who wrote the papers to help, by providing the exact materials needed to reproduce the results. Begley and his colleagues were only able to reproduce 6 of the 53 experiments. In 2011 Bayer reported similar results. They tried to reproduce 67 medical studies, and were only able to reproduce them 25 percent of the time.

But a reader of science news sites would hardly guess that such problems exist. Every week such sites seem to report a glorious march of progress, and make it sound as if wonderful breakthroughs are occurring almost every week.  The problem is that weak scientific research and analysis is uncritically trumpeted.

Part of the problem is university press offices, which nowadays are shameless in exaggerating the importance of research done at their university. They know the more some locally arising scientific research is hyped, the more glory and attendees will flow to their university.

A scientific paper reached the following conclusions, indicating a huge hype and exaggeration crisis both among the authors of scientific papers and the media that reports on such papers:

"Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference....Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study."

Exacerbating this problem is an economic situation in which there is a huge monetary motivation for hyping second-rate research, to make it sound like some glorious breakthrough. Web sites that report science news have ads on their pages. The more users click on a particular page, the more money the web site makes from ad revenue. Given such a situation, there is a huge motivation for web sites to hype science research.  You won't click on some headline of "Another doubtful animal study with too small a sample." But you may click on a misleading headline of, "Amazing study unveils the secret of how memory works."  And once you make such a click, some web site will make money from some ad on the page you clicked on. So the science news hype and exaggeration keeps flowing full blast. 

Almost every day in the science news you can find examples of hyped-up headlines that are inaccurate. The glaring example I find on today's ScienceDaily.com is a story with the extremely false headline "Strongest evidence yet that neutrinos explain how the universe exists." The long-standing matter-antimatter asymmetry mystery is that the Big Bang should have produced equal amounts of matter (protons and electrons) and antimatter (antiprotons and positrons), at densities greater than that of a neutron star. Since matter and antimatter destroy each other upon contact, producing only photons of energy, the Big Bang should have left nothing but photons of energy. This problem cannot at all be solved by any possible finding about neutrinos, mere "ghost particles" that are millions of times less massive than electrons, which are 1836 times less massive than protons. When you read the story you will find zero evidence justifying the headline. 

In many an area of science, researchers are mostly spinning their wheels, making little progress.  A hundred promising leads are not being followed up, because professors don't want to find out about what such leads suggest, which is often that the cherished tenets of such professors are dead wrong. Meanwhile there are endless poor papers inspired by dubious theories, because professors want to be reassured that they are on the right track, and that their beloved dogmas are correct. Much too little real explanatory progress is being made, but you would never know it from reading the "hype almost anything that moves" science press, which seems to trumpet almost any dubious "glorify our guys" press release coming from a university press office, while only very rarely applying critical scrutiny to such claims.

Saturday, March 21, 2020

Exhibit A Suggesting Scientists Don't Understand How a Brain Could Store a Memory

Many a scientist claims that human memories are stored in brains. But when asked to explain how it is that a brain could retrieve a memory, scientists go "round and round" in circles, producing unsubstantial circumlocutions that fail to provide any confidence that scientists understand such a thing.  I discussed such explanatory shortfalls in my 2019 post “Exhibit A Suggesting Scientists Don't Know How a Brain Could Retrieve a Memory,” and my 2020 post  "Exhibit B Suggesting Scientists Don't Know How a Brain Could Retrieve a Memory."  When they attempt to explain how a brain could store a memory, scientists give the same kind of unsubstantial and empty discussion, the kind of discussion that should fail to convince anyone that they have a real understanding of how a brain could do such a thing. 

An example of such a thing is an article that appeared in the online site of the major British newspaper The Guardian. The article by neuroscientist Dean Burnett was entitled "What happens in your brain when you make a memory?" Burnett follows a kind of standard formula followed by writers on this topic.  The rules are rather as follows:

(1) Attempt to persuade readers that you understand memory by talking about the difference between long-term memory and short-term memory. Whenever such discussion occurs, it actually does nothing to show any understanding of a neural basis for memory, for such a discussion can be based purely on observations of how well people perform on different memory tasks. 

(2) Attempt to persuade readers that you understand memory by talking about the difference between episodic memory and conceptual memory. This again is something that can be discussed without any reference to the brain, so any such discussion doesn't do anything to establish some understanding of a neural basis for memory. 
(3) Make frequent use of the word "encoding," without actually presenting any theory of encoding. Neuroscientists love to use the word "encoding" when discussing memory acquisition, as if they had some understanding of some system of encoding or translation by which episodic or conceptual memories could be translated into neural states or synapse states. They do not have any such understanding. No neuroscientist has ever presented a credible, coherent, detailed theory of memory encoding, of how conceptual knowledge or episodic experiences could ever be translated into neural states or synapse states. Any attempt to do such a thing would cause you to become entangled in an ocean of difficulties.  
(4) Mention one or two parts of the brain, usually exaggerating their significance.  I'll give an example of this in a moment. 
(5) Talk dogmatically about synapses, creating the impression that memories are stored in them, without discussing their enormous instability and unsuitability as a place for storing memories that might last for decades. 

 Burnett pretty much follows such a customary set of rules. He uses the word "encoding" or "encode" four times, but fails to present any substantive explanation or idea as to how any human episodic or conceptual information could ever be encoded, in the sense of being translated into neural states. Burnett claims,  "The hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses." He provides no evidence to back up this claim. There are some important reasons for thinking that the claim cannot possibly be correct. 

One reason is that studies have shown that people with very heavy hippocampus damage can have a normal ability to acquire conceptual and learned information.  The paper here discussed three subjects who had lost about half of the matter in their hippocampi.  We read the following:


"All three patients are not only competent in speech and language but have learned to read, write, and spell. ...With regard to the acquisition of factual knowledge, which is another hallmark of semantic memory, the vocabulary, information, and comprehension subtests of the VIQ scale are among the best indices available, and here, too, all three patients obtained scores within the normal range (Table 2). A remarkable feature of Beth’s and Jon’s stores of semantic memories is that they were accumulated after these patients had incurred the damage to their hippocampi."

The same thing was found by the study here. A group of 18 subjects were studied, subjects with severe hippocampus damage. Some 28% to 62% of the hippocampi of these subjects were damaged or destroyed. The subjects had episodic memory problems, but "relatively preserved intelligence, language abilities, and academic attainments."  We are told, "In all but one of our cases, the patients...attended mainstream schools."  Could patients with such heavy hippocampus damage have normal academic achievements if it were true that "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses"?  Not at all. In a similar vein, the study here involving 17 rhesus monkeys found that "monkeys with hippocampal lesions showed no deficits in learning and later recognizing new scenes."

A study looked at memory performance in 140 patients who had undergone an operation called an amygdalohippocampectomy, which removes both the hippocampus and the amygdala. Table 1 of the study found that such an operation had no significant effect on nonverbal memory, causing a difference of less than 3%. Table 3 shows that most patients were unchanged in their verbal memory and nonverbal memory. More patients had a loss in memory than a gain, although about 13% had a gain in nonverbal memory. These results are not at all consistent with Burnett's claim that "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses."

There is a reason why it cannot be true that a new memory requires the formation of new synapses. The reason is that humans can form new memories instantly, but both the formation of a new synapse and the strengthening of a synapse require minutes of time. If someone fires a bullet that passes near your head, you will instantly form a permanent new memory that you will retain for the rest of your life. The same thing will happen the moment you break your leg in a biking accident. Claiming that memories require either the formation of new synapses or the strengthening of synapses is incompatible with a fact of human experience, that humans can form new memories instantly. 


If it were true that memories were stored by a strengthening of synapses, this would be a slow process. The only way in which a synapse can be strengthened is if proteins are added to it. We know that the synthesis of new proteins is a rather slow effect, requiring minutes of time. In addition, there would have to be some very complicated encoding going on if a memory was to be stored in synapses. The reality of newly-learned knowledge and new experience would somehow have to be encoded or translated into some brain state that would store this information. When we add up the time needed for this protein synthesis and the time needed for this encoding, we find that the theory of memory storage in brain synapses predicts that the acquisition of new memories should be a very slow affair, which can occur at only a tiny bandwidth, a speed which is like a mere trickle. But experiments show that we can actually acquire new memories at a speed more than 1000 times greater than such a tiny trickle.

One such experiment is the experiment described in the scientific paper “Visual long-term memory has a massive storage capacity for object details.” The experimenters showed some subjects 2500 images over the course of five and a half hours, and the subjects viewed each image for only three seconds. Then the subjects were tested in the following way described by the paper:

"Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high  (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images."

In this experiment, pairs like those shown below were used. A subject might be presented for 3 seconds with one of the two images in the pair, and then hours later be shown both images in the pair, and be asked which of the two was the one he saw.



Although the authors probably did not intend for their experiment to be any such thing, their experiment is a great experiment to disprove the prevailing dogma about memory storage in the brain. Let us imagine that memories were being stored in the brain by a process of synapse strengthening. Each time a memory was stored, it would involve the synthesis of new proteins (requiring minutes), and also the additional time (presumably requiring additional minutes) for an encoding effect in which knowledge or experience was translated into neural states. If the brain stored memories in such a way, it could not possibly keep up with remembering most of thousands of images that appeared for only three seconds each.
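
To make the timing mismatch concrete, here is a rough back-of-the-envelope sketch in Python. The consolidation time used here is an assumed illustrative figure standing in for "minutes of protein synthesis plus encoding"; it is not a number taken from the cited paper.

```python
# Back-of-the-envelope arithmetic only; the consolidation time is an assumed
# illustrative figure, not a measurement from the cited study.
images_shown = 2500
seconds_per_image = 3                  # viewing time per image in the study
assumed_consolidation_seconds = 300    # assume roughly 5 minutes per stored memory

viewing_time = images_shown * seconds_per_image              # 7,500 s, about 2 hours of viewing
writing_time = images_shown * assumed_consolidation_seconds  # 750,000 s, more than 8 days

print(f"Time the images actually took: {viewing_time / 3600:.1f} hours")
print(f"Time a slow synapse-writing process would need: {writing_time / 3600:.1f} hours")
print(f"Shortfall factor: {writing_time / viewing_time:.0f}x")
```

Under this particular assumption the images arrive about a hundred times faster than such a synapse-writing process could handle; with longer consolidation and encoding times the gap only widens.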

There is another reason why it cannot be true that we remember things because "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses," as Burnett claims.  The reason is that synapses are too unstable to be a storage place for memories that can last for decades.  The proteins in synapses have short lifetimes, lasting for an average of no more than about two weeks.

A fairly recent paper on the lifetime of synapse proteins is the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that one earlier 2010 study found that the average half-life of brain proteins was about 9 days, and that a 2013 study found that the average half-life of brain proteins was about 5 days. The study then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main types of brain proteins (such as nucleus, mitochondrion, etc.) have half-lives of less than 20 days. The synapses themselves do not last for more than a few years.  So synapses lack the stability that would have to exist if memories are to be stored for years.  Humans can reliably remember things for more than 50 years. Such a length of time is about 1000 times longer than the lifetime of proteins in synapses. 

Without providing any evidence for such a claim, Burnett teaches the widely taught idea that memories migrate from one part of the brain to another. He states the following:

"Newer memories, once consolidated, appear to reside in the hippocampus for a while. But as more memories are formed, the neurons that represent a specific memory migrate further into the cortex."

We have no understanding of how a neuron could represent a memory, no evidence that memories are written to any part of the brain, and no understanding of how any such thing as a writing of a memory could occur in neurons and synapses. We also have zero understanding of how a written memory could migrate from one place in a brain to another place, nor do we have any direct evidence that any such migration occurs.  But we do have an extremely strong reason for thinking that accurate memories could not possibly migrate from a hippocampus into the cortex. The reason has to do with the very low reliability of signal transmission in the cortex. 

A scientific paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower." 

Another paper concurs by also saying that there are two problems (unreliable synaptic transmission and a randomness in the signal strength when the transmission occurs):


"On average most synapses respond to only less than half of the presynaptic spikes, and if they respond, the amplitude of the postsynaptic current varies. This high degree of unreliability has been puzzling as it impairs information transmission."



So the transmission of information into the cortex must be extremely unreliable. To get a sense of how unreliable such a transmission would be, with only a 10% chance of a nerve signal getting through, imagine that you are trying to send an email to someone, but your email provider is so unreliable that there is only a 10% chance that any character that you type will be accurately transmitted. You might send your friend an email saying, "Hi Joe, what do you say we have dinner at that new steak place that opened on 42nd Street?" But the email your friend got would be unreadable gibberish, something like "Hwdsd ondSt?" That's the type of information scrambling that would occur if memories were to migrate from the hippocampus into the cortex, given a cortex where there is only a 10% chance of any action potential (or nerve signal) transmitting.  
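
Here is a toy simulation of that email analogy (a sketch only; the 10% figure comes from the papers quoted above, and the message is just the example sentence):

```python
import random

# Toy model of the email analogy: each character independently has only a
# 10% chance of getting through, mirroring the low release probabilities
# quoted above for cortical synapses.
def noisy_transmit(message: str, p_success: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    return "".join(ch for ch in message if rng.random() < p_success)

original = ("Hi Joe, what do you say we have dinner at that new steak place "
            "that opened on 42nd Street?")
print(noisy_transmit(original))  # a short scrap of gibberish, roughly a tenth of the characters
```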



So if memories were migrating into our cortex, we would never be able to remember things accurately. But humans have an astonishing capability for memorizing vast amounts of information with 100% accuracy. It is a fact that some Muslims accurately memorize every word of their holy book. We also know that actors can accurately memorize each of the 1569 lines of the role of Hamlet, and that Wagnerian tenors can accurately memorize both the notes and the words of the extremely long parts of Siegfried and Tristan (the role of Siegfried requires someone to sing on stage for most of four hours). 


Once we carefully ponder all the reasons for rejecting its main claims, along with its failure to discuss any robust evidence for a brain storage of memory, we can see that an article such as the Guardian article is a kind of Exhibit A showing that modern neuroscientists have no real understanding of how a brain could do any such thing as store a memory. Nature never told us that brains store memories. It was merely neuroscientists who made such a claim, without good evidence.

The article by Burnett is not a detailed scientific paper, but if we look at a typical scientific paper attempting to present evidence for memory storage in a brain, we will not find any robust evidence. A recent example is the 2019 paper "Changes of Synaptic Structures Associated with Learning, Memory and Diseases." The paper fails to provide any solid evidence that synapse states have any causal relation with memory acquisition. No clear message comes from findings such as "motor learning rapidly increases the formation and elimination of spines of L5 PyrNs in the mouse primary motor cortex (M1), leading to a transient increase in spine number, which over days returns to the baseline," combined with other statements such as "another study showed that spine dynamics on L2/3 PyrNs are not affected by motor learning." Anyone looking for a relation between one effect and some other physical factor (in a small number of tries) will have perhaps a 25% chance of finding what looks like a correlation purely by chance. For example, if I try to look for a relation between stock market declines and rainfall, I'll have perhaps a 25% chance of finding such an effect if I test on four random days (see the simulation sketched below). So we would expect that neuroscientists hoping to find some correlation between synapse activity and learning would find such a correlation in a certain fraction of their attempts, purely by chance, even if synapses are not a storage place of learned information. Nowhere in this paper is there anything like an explanation of how a brain could store a memory; and when the paper's authors confess that "the stability of memory and the dynamism of synapses remain to be reconciled," they basically admit that they have no answer to the objection that synapses are too unstable to be storing memories that last for decades.
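
Here is a minimal Monte Carlo sketch of my own illustrating that rough figure, under the assumption that a sample correlation of at least 0.75 in magnitude over four days counts as "what looks like a correlation" (for normally distributed data, with only four independent data points the sample correlation coefficient is uniformly distributed between -1 and 1, so the chance comes out to about 25%):

```python
# With only four independent data points, a seemingly strong correlation
# between two unrelated quantities arises by chance surprisingly often.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_trials, threshold = 4, 20_000, 0.75
count = 0
for _ in range(n_trials):
    rainfall = rng.normal(size=n_days)        # independent of the market
    market_change = rng.normal(size=n_days)   # no real relationship exists
    r = np.corrcoef(rainfall, market_change)[0, 1]
    if abs(r) >= threshold:                   # "looks like a correlation"
        count += 1
print(count / n_trials)   # roughly 0.25: about a 1-in-4 chance by luck alone
```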

Friday, February 21, 2020

Fraud and Misconduct Are Not Very Rare in Biology

Without getting into the topic of outright fraud, we know of many common problems that afflict a sizable percentage of scientific papers. One is that it has become quite common for scientists to use titles for their papers announcing results or causal claims that are not actually justified by any data in the papers. A scientific study found that 48% of scientific papers use "spin" in their abstracts. Another problem is that scientists may change their hypothesis after starting to gather data, a methodological sin that is called HARKing, which stands for Hypothesizing After Results are Known. An additional problem is that given a body of data that can be analyzed in very many ways, scientists may simply experiment with different methods of data analysis until one produces the result they are looking for. Still another problem is that scientists may use various techniques to adjust the data they collect, such as stopping data collection as soon as some desired statistical result appears (illustrated in the simulation below), or arbitrarily excluding data points that create problems for whatever claim they are trying to show. Then there is the fact that scientific papers are very often a mixture of observation and speculation, without the authors making clear which part is speculation. Then there is the fact that through the use of heavy jargon, scientists can make the most groundless and fanciful speculation sound as if it were something strongly rooted in fact, when it is no such thing. Then there is the fact that scientific research is often statistically underpowered, and very often involves sample sizes too small to justify any confidence in the results.
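
To illustrate just one of these practices, here is a minimal simulation of my own (not drawn from any particular study) showing how the optional-stopping habit just mentioned inflates the rate of false positives even when no real effect exists:

```python
# Even when the true effect is zero, peeking at the p-value after every new
# data point and stopping at the first p < 0.05 yields "significant" results
# far more often than the nominal 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, max_n, false_positives = 2_000, 100, 0

for _ in range(n_experiments):
    data = []
    for _ in range(max_n):
        data.append(rng.normal())             # null data: the true mean is 0
        if len(data) >= 10:                   # start peeking after 10 points
            p = stats.ttest_1samp(data, 0.0).pvalue
            if p < 0.05:                      # stop as soon as it "works"
                false_positives += 1
                break

print(false_positives / n_experiments)  # well above the nominal 5% rate
```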

All of these are lesser sins. But what about the far more egregious sin of outright researcher misconduct or fraud?  The scientists Bik, Casadevall and Fang attempted to find evidence of such misconduct by looking for problematic images in biology papers.  We can imagine various ways in which a scientific paper might have a problematic image or graph indicating researcher misconduct:

(1) A photo in a particular paper might be duplicated in a way that should not occur.  For example, if a paper is showing two different cells or cell groups in two different photos,  those two photos should not look absolutely identical, with exactly the same pixels. Similarly, brain scans of two different subjects should not look absolutely identical, nor should photos of two different research animals. 
(2) A photo in a particular paper that should be different from some other photo in that paper might be simply the first photo with one or more minor differences (comparable to submitting a photo of your sister, adjusted to have gray hair, and labeled as a photo of your mother). 
(3) A photo in a particular paper that should be original to that paper might be simply a duplicate of some photo that appeared in some previous paper by some other author, or a duplicate with minor changes.
(4) A photo in a particular paper might show evidence of being Photoshopped.  For example, there might be 10 areas of the photo that are exact copies of each other, with all the pixels being exactly the same. 
(5) A graph or diagram in a paper that should be original to that paper might be simply a duplicate of some graph or diagram that appeared in some previous paper by some other author. 
(6) A graph might have evidence of artificial manipulation, indicating it did not naturally arise from graphing software. For example, one of the bars on a bar graph might not be all the same color. 


[Image: research misconduct]

There are quite a few other ways in which researcher misconduct could be identified by examining images, graphs or figures. Bik, Casadevall and Fang made an effort to find such problematic figures. In their paper "The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications," they report a large-scale problem. They conclude, "The results demonstrate that problematic images are disturbingly common in the biomedical literature and may be found in approximately 1 out of every 25 published articles containing photographic image data."

But there is a reason for thinking that the real percentage of research papers with problematic images or graphs is far greater than this figure of only 4%. The reason is that the techniques used by Bik, Casadevall and Fang seem rather inefficient, capable of finding only a fraction of the papers with problematic images or graphs. They describe their technique as follows (20,621 papers were checked):

"Figure panels containing line art, such as bar graphs or line graphs, were not included in the study. Images within the same paper were visually inspected for inappropriate duplications, repositioning, or possible manipulation (e.g., duplications of bands within the same blot). All papers were initially screened by one of the authors (E.M.B.). If a possible problematic image or set of images was detected, figures were further examined for evidence of image duplication or manipulation by using the Adjust Color tool in Preview software on an Apple iMac computer. No additional special imaging software was used. Supplementary figures were not part of the initial search but were examined in papers in which problems were found in images in the primary manuscript."

This seems like a rather inefficient technique which would find less than half of the evidence for researcher misconduct that might be present in photos, diagrams and graphs. For one thing, the technique ignored graphs and diagrams. Probably one of the biggest opportunities for misconduct is researchers creating artificially manipulated graphs not naturally arising from graphing software, or researchers simply stealing graphs from other scientific papers. For another thing, the technique used would only find cases in which a single paper showed internal evidence of image shenanigans. The technique would do nothing to find cases in which one paper was inappropriately using an image or graph that came from some other paper by different authors. Also, the technique ignored supplemental figures (unless a problem was found in the main figures). Such supplemental figures are often a significant fraction of the total number of images and graphs in a scientific paper, and are often referenced in the text of a paper as supporting evidence. So they should receive the same scrutiny as the other images or figures in a paper.

I can imagine a far more efficient technique for looking for misconduct related to imagery and graphs. Every photo, every diagram, every figure and every graph in every paper in a very large set of papers on a topic (including supplemental figures) would be put into a database. A computer program with access to that database would then run through all the images, looking for duplicates or near-duplicates, as well as other evidence of researcher misconduct. Such a program might also make use of "reverse image search" capabilities available online. Such computerized crunching of the image data could be combined with manual checks. Such a technique would probably find twice as many problems. Because the method for detecting problematic images described by Bik, Casadevall and Fang is a rather inefficient one that skips half or more of its potential targets, we have reason to suspect that they have merely shown us the tip of the iceberg, and that the actual rate of problematic images and graphs (suggesting researcher misconduct) in biology papers is much greater than 4% -- perhaps 8% or 10%.
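
As a rough sketch of my own (not the method of Bik, Casadevall and Fang, and assuming the figures have already been extracted from the papers into image files), such a program could use perceptual hashing to flag identical or near-identical figures across an entire corpus for human review:

```python
# Requires the third-party libraries Pillow and imagehash. Computes a
# perceptual hash for every extracted figure, then flags pairs whose hashes
# are identical or nearly identical as candidate duplicates for manual review.
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

def find_candidate_duplicates(figure_dir, max_distance=4):
    """Return pairs of image files whose perceptual hashes differ by at most max_distance bits."""
    hashes = {}
    for path in Path(figure_dir).glob("**/*.png"):
        hashes[path] = imagehash.phash(Image.open(path))
    candidates = []
    for (path_a, hash_a), (path_b, hash_b) in combinations(hashes.items(), 2):
        if hash_a - hash_b <= max_distance:   # small Hamming distance: near-duplicate
            candidates.append((path_a, path_b, hash_a - hash_b))
    return candidates

# Flag suspicious figure pairs across an entire corpus for human inspection.
for a, b, distance in find_candidate_duplicates("extracted_figures"):
    print(f"possible duplicate ({distance} bits apart): {a} vs {b}")
```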

A later paper ("Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience") by Bik, Casadevall and Fang (along with Davis and Kullas) involved analysis of a different set of papers. The paper concluded that "as many as 35,000 papers in the literature are candidates for retraction due to inappropriate image duplication." They found that 6% of the papers "contained inappropriately duplicated images." They reached this conclusion after examining a set of papers in the journal Molecular and Cellular Biology. To reach this conclusion, they used the same rather inefficient method as their previous study cited above. They state, "Papers were scanned using the same procedure as used in our prior study." We can only wonder how many biology papers would be found to be "candidates for retraction" if a really efficient (partially computerized) method were used to search for the image problems: one using an image database and reverse image searching, one checking not only photos but also graphs, and one also checking the supplemental figures in the papers. Such a technique might easily find that 100,000 or more biology papers were candidates for retraction.

We should not be terribly surprised by such a situation. In modern academia there is relentless pressure for scientists to grind out papers at a high rate. There also seem to be relatively few quality checks on the papers submitted to scientific journals. Peer review serves largely as an ideological filter, to prevent the publication of papers that conflict with the cherished dogmas of the majority. There are no spot checks of papers submitted for publication, in which reviewers ask to see the source data or original lab notes or lab photographs produced in experiments. The problematic papers found by the studies mentioned above managed to pass peer review despite glaring duplication errors, indicating that peer reviewers are not making much of an attempt to exclude fraud. Given this misconduct problem and the items mentioned in my first paragraph, and given how often biologists carelessly speak as if unproven or discredited claims were facts, it seems there is a significant credibility problem in academic biology.

In an unsparing essay entitled "The Intellectual and Moral Decline in Academic Research," Edward Archer, PhD, states the following:

"My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields. Yet, more importantly, training in 'science' is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts....American universities often produce corrupt, incompetent, or scientifically meaningless research that endangers the public, confounds public policy, and diminishes our nation’s preparedness to meet future challenges....Universities and federal funding agencies lack accountability and often ignore fraud and misconduct. There are numerous examples in which universities refused to hold their faculty accountable until elected officials intervened, and even when found guilty, faculty researchers continued to receive tens of millions of taxpayers’ dollars. Those facts are an open secret: When anonymously surveyed, over 14 percent of researchers report that their colleagues commit fraud and 72 percent report other questionable practices....Retractions, misconduct, and harassment are only part of the decline. Incompetence is another....The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor. That failure means taxpayers are being misled by results that are non-reproducible or demonstrably false."

Justin T. Pickett, PhD, has written a long, illuminating post entitled "How Universities Cover Up Scientific Fraud." It states the following:

"I learned a hard lesson last year, after blowing the whistle on my coauthor, mentor and friend: not all universities can be trusted to investigate accusations of fraud, or even to follow their own misconduct policies. Then I found out how widespread the problem is: experts have been sounding the alarm for over thirty years. One in fifty scientists fakes research by fabricating or falsifying data....Claims that universities cover up fraud and even retaliate against whistleblowers are common....More than three decades ago, after spending years at the National Institutes of Health studying scientific fraud, Walter Stewart came to a similar conclusion. His research showed that fraud is widespread in science, that universities aren’t sympathetic to whistleblowers and that those who report fraudsters can expect only one thing: 'no matter what happens, apart from a miracle, nothing will happen.' ”

An Editor-in-Chief of the journal Molecular Brain has found evidence suggesting that a significant fraction of neuroscientists may not have the raw data backing up the claims in their scientific papers. He states the following:

"As an Editor-in-Chief of Molecular Brain, I have handled 180 manuscripts since early 2017 and  have made 41 editorial decisions categorized as 'Revise before review,' requesting that the authors provide raw data. Surprisingly, among those 41 manuscripts, 21 were withdrawn without providing raw data, indicating that requiring  raw data drove away more than half of the manuscripts. I rejected 19 out of the remaining 20 manuscripts because of insufficient raw data. Thus, more than 97% of the 41 manuscripts did not present the raw data supporting their results when requested by an editor, suggesting a possibility that the raw data did not exist from the beginning, at least in some portions of these cases....We really cannot know what percentage of those manuscripts have fabricated data....Approximately 53% of the 227 respondents from the life sciences field answered that they suspect more than two-thirds of the manuscripts that were withdrawn or did not provide sufficient raw data might have..fabricated the data."