"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals, you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."
Tuesday, March 29, 2022
Why the Academia Cyberspace Profit Complex Keeps Giving Misleading Brain Research Reports
Sunday, March 20, 2022
"Thousands of Participants Are Needed for Accurate Results," But Most Brain Scan Studies Don't Even Use Dozens
For many years neuroscientists have been claiming important results about brains and minds after doing brain imaging experiments with very small sample sizes. For example, we may read headlines saying that some particular region of the brain is more active during some type of mental event, even though the total number of subjects who had their brains scanned is usually fewer than 15. A new press release from the University of Minnesota Twin Cities announces results indicating that such small-sample, correlation-seeking brain imaging experiments are utterly unreliable. The headline of the press release is "Brain studies show thousands of participants are needed for accurate results."
We read this:
"Scientists rely on brain-wide association studies to measure brain structure and function—using MRI brain scans—and link them to complex characteristics such as personality, behavior, cognition, neurological conditions and mental illness. New research published March 16, 2022 in Nature from the University of Minnesota and Washington University School of Medicine in St. Louis...shows that most published brain-wide association studies are performed with too few participants to yield reliable findings."
The abstract of the paper in the science journal Nature can be read here. The paper is entitled, "Reproducible brain-wide association studies require thousands of individuals."
The press release tells us this:
"The study used publicly available data sets—involving a total of nearly 50,000 participants—to analyze a range of sample sizes and found:
- Brain-wide association studies need thousands of individuals to achieve higher reproducibility. Typical brain-wide association studies enroll just a few dozen people.
- So-called 'underpowered' studies are susceptible to uncovering strong but misleading associations by chance while missing real but weaker associations.
- Routinely underpowered brain-wide association studies result in a surplus of strong yet irreproducible findings."
"To identify problems with brain-wide association studies, the research team began by accessing the three largest neuroimaging data sets: the Adolescent Brain Cognitive Development Study (11,874 participants), the Human Connectome Project (1,200 participants) and the UK Biobank (35,375 participants). Then, they analyzed the data sets for correlations between brain features and a range of demographic, cognitive, mental health and behavioral measures, using subsets of various sizes. Using separate subsets, they attempted to replicate any identified correlations. In total, they ran billions of analyses, supported by the MIDB Informatics Group and the powerful computing resources of the Minnesota Supercomputing Institute. The researchers found that brain-behavior correlations identified using a sample size of 25—the median sample size in published papers—usually failed to replicate in a separate sample. As the sample size grew into the thousands, correlations became more likely to be reproduced. Robust reproducibility is critical for today’s clinical research. Senior author Nico Dosenbach, MD, PhD, an associate professor of neurology at Washington University, says the findings reflect a systemic, structural problem with studies that are designed to find correlations between two complex things, such as the brain and behavior."
Monday, March 14, 2022
When They Get Data Suggesting Brains Don't Make Minds, They Repackage It As "Brains Make Minds"
The "brains make minds" dogma is so entrenched in academia that many scientists feel afraid to challenge it, on the grounds that becoming a heretic is not a good career move. What often happens is that scientists will get some observational result that is inconsistent with the dogma that brains make minds, and such scientists will try to repackage this result as a "brains make minds" result. Examples of this can be found in the discussion of humans who think very well and have good intelligence despite having lost half, most or almost all of their brains because of disease or surgery to stop severe seizures. Rather than listening to what nature is suggesting by such cases (that the brain is not the source of the mind), our scientists may try to repackage such results as something like "evidence of the amazing plasticity of the brain, which can work well even when most of it has been lost." Similarly, if someone claims your teeth produce your mind, and you lose most of your teeth, he may say, "Well, isn't that amazing: it requires just a few teeth for you to be smart!"
In today's science news, we have an example of such repackaging of results to fit the standard narrative (even when the results suggest that narrative is wrong). It is a news story entitled "Surprise! Complex Decision Making Found in Predatory Worms With Just 302 Neurons." No evidence has been produced that such decision-making occurs through neurons. We read, "Instead of looking at actual neurons and cell connections for signs of decision making, the team looked at the behavior of P. pacificus instead – specifically, how it chose to use its biting capabilities when confronted with different types of threat." We read about the worms taking "two different strategies" when biting, one involving "biting to devour" and the other involving "biting to deter." We read this:
"By observing where P. pacificus worms laid their eggs, and how their behavior changed when a bacterial food source was nearby, the scientists determined that bites on adult C. elegans were intended to drive them away – in other words, they weren't simply failed attempts to kill these competitors. While we're used to such decision making from vertebrates, it hasn't previously been clear that worms had the brainpower to proverbially weigh up the pros, cons, and consequences of particular actions in this way."
If we knew such worms produced such "complex decision making" by the action of neurons, would we then be entitled to say, "Complex decision making can arise from only 302 neurons"? No, not at all. Very many or most of the neurons of any organism are presumably dedicated to things such as muscle movement, sensory perception and autonomic function. We should presume that 90% of the neurons in such worms are tied up in such things. If you then wanted to claim that complex decision making came from the neurons of such worms, you would have to presume that a mere 30 or so neurons were producing such complex decisions.
Such a claim would be laughable. Humans have no understanding of how billions of neurons in a human brain could produce any such thing as thinking, understanding or decision making. To claim that complex decision making can come from only a very small number of neurons in a worm seems absurd, rather like thinking that someone with only a few dozen muscle cells could lift an air conditioner up above his head.
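To make the arithmetic behind this objection explicit, here is a small worked sketch. The 90% figure is the presumption stated above, not a measured value, and 86 billion is just the commonly cited estimate for the number of neurons in a human brain.

```python
# Worked arithmetic for the estimate above. The 90% share of neurons presumed
# devoted to movement, sensation and autonomic function is the text's stated
# presumption, not a measured value; 86 billion is the commonly cited estimate
# for the human brain.
WORM_NEURONS = 302
NON_DECISION_SHARE = 0.90            # presumed share handling muscles, senses, etc.
HUMAN_NEURONS = 86_000_000_000       # commonly cited estimate

worm_decision_neurons = WORM_NEURONS * (1 - NON_DECISION_SHARE)
print(f"Neurons left over for 'complex decision making': about {worm_decision_neurons:.0f}")
print(f"As a fraction of a human brain's neuron count: {worm_decision_neurons / HUMAN_NEURONS:.1e}")
```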
The writer of today's news story should have recognized that these results conflict with claims that minds are produced by brains. Instead, the results were repackaged to conform with the "brains make minds" dogma. So the beginning of the news story read like this:
"As scientists continue to discover more about the brain and how it works, it can help to know just how much brain matter is required to perform certain functions – and to be able to make complex decisions, it turns out just 302 neurons may be required."
See here for another example of complex thought from tiny animals (ravens). An article in Knowable Magazine suggests that tiny spiders are capable of complex thought. We read this:
"There is this general idea that probably spiders are too small, that you need some kind of a critical mass of brain tissue to be able to perform complex behaviors,' says arachnologist and evolutionary biologist Dimitar Dimitrov of the University Museum of Bergen in Norway. 'But I think spiders are one case where this general idea is challenged. Some small things are actually capable of doing very complex stuff.' Behaviors that can be described as 'cognitive,' as opposed to automatic responses, could be fairly common among spiders, says Dimitrov, coauthor of a study on spider diversity published in the 2021 Annual Review of Entomology."
In one test of intelligence, tiny mouse lemurs with brains about 1/200 the size of chimpanzees' brains did about as well as the chimpanzees. We read this:
"The results of the new study show that despite their smaller brains lemurs' average cognitive performance in the tests of the PCTB was not fundamentally different from the performances of the other primate species. This is even true for mouse lemurs, which have brains about 200 times smaller than those of chimpanzees and orangutans."
This result is what we might expect under the hypothesis that brains do not make minds, but not at all what we would expect under the claim that brains make minds.
Tuesday, March 8, 2022
US Government Gives Us Fake News About Brains and Memory
Courtesy of a sub-branch of the United States government, we have in today's science news an utterly bogus headline as phony as a three-dollar bill. The headline is "Researchers uncover how the human brain separates, stores, and retrieves memories." The headline appears in a press release published by the National Institute of Neurological Disorders and Stroke, a branch of the National Institutes of Health (NIH), a branch of the US government.
Scientists have no actual understanding of how memories form or how a human being is able to retrieve a memory. They have never been able to discover any credible coding mechanism or translation mechanism by which any of the main forms of human memories could be translated into neural states or synapse states. Computers have read-write heads to store information in particular places on a disk. The brain has nothing like a write component that could be used to store information in some particular part of the brain, and nothing like a read component that could be used to read information from some particular part of the brain. Computers have indexing systems and addressing systems that allow the instant retrieval of stored information. No such thing exists in the brain, which has no indexing system, no addressing, no coordinate system and no position notation system. So the instant recall of a memory (given a single word or phrase) would seem to be impossible if such a recall occurs by the reading of neurons or synapses. As discussed here, the extremely abundant levels of noise in the brain should make impossible both the accurate storage of learned information in the brain and the accurate retrieval of learned information from the brain. And the many typically overlooked slowing factors in the brain (such as synaptic delays) should make it impossible for a brain to be responsible for memory retrieval that can occur instantly. Given the very short lifetimes of synaptic proteins (1000 times shorter than the longest length of time humans can remember things), and the high turnover of dendritic spines, no one has been able to come up with a credible theory of how brains could store memories that last for 50 years. Nor has anyone been able to explain how the sluggish chemical operations in a brain could instantly form a memory, something humans routinely do. Learned memory information has never been discovered by examining any type of neural tissue. For example, not one single bit of a person's memory can be retrieved from a corpse or from tissue extracted during brain surgery.
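The comparison with computer storage can be made concrete. Below is a minimal sketch of the kind of keyed, addressed retrieval the paragraph above refers to; it illustrates only the computer side of the comparison, and the cue strings and stored items are made-up examples.

```python
# A minimal sketch of indexed retrieval in software (illustrating only the
# computer side of the comparison above; the cues and stored items are
# made-up examples).
memory_index = {
    "first bicycle": "red bike, fell off twice while learning to ride",
    "wedding day":   "rained all morning, cleared up for the ceremony",
    "paris trip":    "climbed the Eiffel Tower stairs in 2005",
}

def recall(cue: str) -> str:
    # A hash-based index maps a cue directly to the location of the stored
    # item, so the lookup cost does not grow with the number of items stored.
    return memory_index.get(cue.lower(), "no entry found for that cue")

print(recall("Wedding day"))    # a single cue retrieves the stored item at once
```

The sketch works only because the dictionary supplies an explicit addressing scheme that maps a cue to a storage location, which is precisely the kind of component that, as argued above, has never been found in the brain.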
The study in question ("Neurons detect cognitive boundaries to structure episodic memories in humans") involved 20 epilepsy patients who had electrodes implanted in their brains, presumably for medical reasons such as determining the source of their seizures. The patients were shown some videos, and electrode readings were taken of electrical signals from their brains. In the press release we read the following:
"The researchers recorded the brain activity of participants as they watched the videos, and they noticed two distinct groups of cells that responded to different types of boundaries by increasing their activity. One group, called 'boundary cells' became more active in response to either a soft or hard boundary. A second group, referred to as 'event cells' responded only to hard boundaries. This led to the theory that the creation of a new memory occurs when there is a peak in the activity of both boundary and event cells, which is something that only occurs following a hard boundary."
I do not have access to the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper, which is behind a paywall. But you can read for free the preprint of an identical-sounding paper by the same lead author (Jie Zheng) involving the same 20 epilepsy patients, the same claims, the same brain region (the medial temporal lobe), and the same experimental method involving taking electrode readings of brain signals while patients were watching videos. That preprint ("Cognitive boundary signals in the human medial temporal lobe shape episodic memory representation") is not very impressive.
The extremely dubious method followed was to arbitrarily select hundreds of neurons for study, and to look for some tiny subset of neurons whose electrical activity could be correlated (merely in some fraction-of-a-second blip way) with memory activity of the human subjects when "boundary conditions" of videos were shown, giving such neurons the nicknames "boundary cells" or "event cells." The number of such "boundary cell" neurons found was reportedly 7%. The first giant problem is that, given the many billions of neurons in the human brain, there is no reason to think that the arbitrarily selected set of hundreds of neurons had any involvement at all in the storage or retrieval of a human memory. In fact, there is a very strong reason for thinking such neurons almost certainly had no such involvement: a few hundred is a vanishingly tiny fraction of many billions.
The second giant problem is that there is every reason to suspect that the small percentage of supposedly correlated neurons found (reportedly 7%) is just what we would expect to find by chance when examining neurons with random electrical signals having nothing to do with memory. The authors claim that chance would have produced a result of only 2% rather than 7%. But since the paper did not involve any blinding protocol (such as should have been used for a study like this to be worthy of our attention), we should not be impressed by such a difference. We do not know whether the 7% is an over-estimate arising from scientists seeing what they wanted to see, nor whether the 2% is an under-estimate that made the reported contrast look larger; in both cases the analysis was free to drift toward the desired answer, precisely because no blinding protocol was in place to reduce analytic bias.
A similar state of affairs holds in regard to the reported detection of cells called "event cells." The authors claim to have found that 6% of the hundreds of studied cells had some fraction-of-a-second correlation characteristic allowing them to be classified as "event cells," and they claim that only 2% of cells would have such characteristics by chance. But since the authors failed to follow any blinding protocol, we cannot have confidence in either of these numbers.
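A small simulation can show why such percentages deserve skepticism. The sketch below is not the paper's analysis; the neuron count, the six candidate time windows, and the significance threshold of 0.01 are purely illustrative assumptions. Its point is that a flexible, unblinded screening analysis over hundreds of neurons will label a noticeable percentage of cells "responsive" even when the firing data are pure noise.

```python
# A minimal sketch (not the paper's analysis) of how analytic flexibility can
# inflate the fraction of "responsive" neurons found in pure noise. The neuron
# count, the six candidate time windows, and the p < 0.01 threshold are all
# illustrative assumptions, not parameters taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_neurons, n_windows, n_events = 500, 6, 60

flagged = 0
for _ in range(n_neurons):
    hits = 0
    for _ in range(n_windows):
        # firing-rate samples with no real relationship to the video "boundaries"
        at_boundaries = rng.poisson(5.0, n_events)
        elsewhere = rng.poisson(5.0, n_events)
        _, p = stats.ttest_ind(at_boundaries, elsewhere)
        if p < 0.01:
            hits += 1
    if hits > 0:   # flexible rule: call the cell responsive if any window "works"
        flagged += 1

print(f"{flagged} of {n_neurons} noise-only neurons "
      f"({flagged / n_neurons:.0%}) end up labeled as boundary-responsive")
```

With a single fixed, pre-registered test per neuron, the chance rate stays near the nominal threshold; with even modest flexibility in how each cell may qualify, the rate climbs several-fold. That is exactly the kind of drift a blinding protocol is meant to prevent, and it is why the gap between a claimed 7% and a claimed 2% chance rate is not, by itself, impressive.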
Under the very unlikely scenario that some meaningful difference in neuron response has been detected here, there is no particular reason to think that it is some neural sign of memory formation or memory retrieval. There are any number of reasons why brain cells might respond differently while videos are being shown, most of which have nothing to do with learning or memory. For example, a different visual stimulus can produce a different neural response, as can a different muscle movement or a fleeting emotion. We are told that the "boundary conditions" in the watched videos (supposedly producing different responses in the so-called "boundary cells") were accompanied by "sharp visual input changes." So any difference in neural response might have been merely a difference related to different visual perceptions, not something having to do with memory.
In short, no robust evidence has been provided in this preprint that any cells were involved in memory formation or memory retrieval, and since the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper by the same lead author seemed to be identical in all the main features, there is no reason to think that such a study provided any evidence for a brain involvement in memory formation or memory retrieval.
Here is an excerpt from the press release touting the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper, one that uses a faulty line of reasoning:
"The researchers next looked at memory retrieval and how this process relates to the firing of boundary and event cells. They theorized that the brain uses boundary peaks as markers for 'skimming' over past memories, much in the way the key photos are used to identify events. When the brain finds a firing pattern that looks familiar, it 'opens' that event.
Two different memory tests designed to study this theory were used. In the first, the participants were shown a series of still images and were asked whether they were from a scene in the film clips they just watched. Study participants were more likely to remember images that occurred soon after a hard or soft boundary, which is when a new 'photo' or 'event' would have been created.
The second test involved showing pairs of images taken from film clips that they had just watched. The participants were then asked which of the two images had appeared first. It turned out that they had a much harder time choosing the correct image if the two occurred on different sides of a hard boundary, possibly because they had been placed in different 'events.'
These findings provide a look into how the human brain creates, stores, and accesses memories."
There is no justification for claiming that the experiments discussed in the quote above tell us anything about the brain. The experiments discussed in the quote above are psychology experiments involving only human mental performance, without any measurement of the brain. What we see here is a trick that materialists frequently use: use some experimental results that do not involve any brain reading or brain scanning or brain measurement, and then claim that such results tell you something about the brain. When experimental results merely tell us that humans perform in such-and-such a way, or merely tell us that minds perform in such-and-such a way, we have no warrant for saying that such results tell us that the brain is performing in such-and-such a way.
Not one single bit of robust evidence has been provided in the press release that any understanding has occurred as to how a brain could store or retrieve a memory, nor has any robust evidence been provided for the claim that brains store or retrieve memories. All of the old reasons for rejecting such claims remain as strong as ever.
In today's NIH press release we have an extremely untrue statement saying, "This work is transformative in how the researchers studied the way the human brain thinks." No, the study described is just another example of a dubious neuroscience research design of the kind I have seen countless times before. The study was funded by the NIH's Brain Initiative, and the PR people of that project have often groundlessly used the word "transformative" for meager research results. I quote from a previous post of mine discussing the lack of major progress made by the Brain Initiative:
Wednesday, March 2, 2022
No Solid Principle Justifies "Brains Make Minds" Thinking
In the posts on this blog, I have shown that the facts do not justify conventional claims that the brain is the source of the human mind, and claims that memories are stored in brains. But could there be some kind of general principle that justifies thinking that brains make minds? Let's look at some possible principles, and see how well they stand up to scrutiny.
One possible principle that could be evoked to try to justify "brains make minds" claims is a principle that physical effects must be explained by physical causes. But this is not a defensible principle to justify "brains make minds" thinking. For one thing, mental effects such as thinking and understanding are not physical effects. Secondly, it would seem that many physical effects are not produced by physical causes, but are instead produced by mental causes. If John becomes enraged at Joe and punches Joe, that is not a physical cause producing a physical effect, but a mental cause (rage) producing a physical effect.
Another possible principle that could be evoked to try to justify "brains make minds" claims is a principle that mental effects must be explained by physical causes. But this is not a defensible principle to justify "brains make minds" thinking. Consider this case: John becomes very sad because his true love Mary has become very sad. This would seem to be a case of a mental effect being produced by another mental effect, and countless other examples of such a thing could be given. It would not seem to be true that mental effects must always be explained by physical causes.
Another possible principle that could be evoked to try to justify "brains make minds" claims is a principle that scientists must never explain things by imagining invisible causes. A person could evoke this principle, and then say, "So rather than evoking some invisible cause for things mental, we must think of a visible cause: the brain." But this is not a defensible principle to justify "brains make minds" thinking. The fact is that outside the world of neuroscience, scientists often evoke invisible causes to explain things.
To explain the movements of bodies in the solar system, scientists evoke a universal law of gravitation. Gravitation is very much an invisible cause. You can observe someone falling from gravity, but the force of gravitation is itself invisible. To give another example, cosmologists (scientists who study the universe as a whole) habitually evoke two never-observed invisible things as explanations: dark matter and dark energy. Such invisible and never-observed things are pillars in the explanation systems of cosmologists. So it simply isn't true that scientists must never explain things by imagining invisible causes. If neuroscientists were to stop telling us that our brains make our minds, and were to start teaching that our minds arise from some mysterious "mind source" external to our bodies, this would be nothing very different from what cosmologists have been doing for decades, by appealing to invisible never-measured dark matter and dark energy.
Another possible principle that could be evoked to try to justify "brains make minds" claims is the long-standing principle of Occam's Razor. This was originally stated as the principle that "entities should not be multiplied beyond necessity." One could appeal to the Occam's Razor principle when trying to justify a belief that brains make minds. The reasoning might go like this:
"If we imagine that a brain is the cause of all of the mind and the storage spot of memory, that is simpler than imagining some soul is involved. For if you imagine a soul, you must also imagine some soul-giver or a soul source; and then you are postulating two things, not just one (a brain). But it is better to avoid postulating multiple things if you can postulate only one thing. That's the long-standing principle of Occam's Razor."
This argument is fallacious because it misstates Occam's Razor. As the wikipedia.org article on Occam's Razor notes, the principle is sometimes inaccurately paraphrased as "the simplest explanation is usually the best one." It is not a valid principle that we should always prefer the simpler or simplest explanation. For example, if we imagine atoms as hard, indivisible particles, as some ancient thinkers did, that is simpler than imagining atoms as usually being structured of multiple electrons, protons and neutrons. But in this case the more complicated explanation, the one postulating more things, is the correct one.
Occam's Razor is the principle that "entities should not be multiplied beyond necessity," and that "beyond necessity" part is a crucial part of the principle. In other words, Occam's Razor is the principle that we should not assume additional causal factors unless we need to do so. Below are some examples of correct and incorrect applications of Occam's Razor:
(1) A man was shot in the back when a rifle bullet tore into his flesh. Should we assume that two people pulled the trigger, or only one? You don't need two people to pull a trigger. So according to Occam's Razor, we should assume only one person pulled the trigger.
(2) A man was killed when he was simultaneously shot in the back and also struck by an arrow that hit him in the front. We cannot evoke Occam's Razor to say there was only a single killer. Here there is a necessity for postulating multiple causes. So it is quite consistent with Occam's Razor for us to assume there were two killers, one shooting from the front, and another shooting from the back.
In the case of the mind and the brain, there are multiple necessities for assuming that the mind arises from something beyond the brain. They include:
- the very short lifetime of proteins in the brain (about 1000 times shorter than the longest length of time old people can remember things);
- the rapid turnover and high instability of dendritic spines;
- the failure of scientists to ever find the slightest bit of stored memory information when examining neural tissue;
- the existence of good and sometimes above-average intelligence in some people whose brains had been almost entirely replaced by watery fluid (such as the hydrocephalus patients of John Lorber);
- the lack of any indexing system, coordinate system or position notation system in the brain that might help to explain the wonder of instant memory recall;
- the good persistence of learned memories after surgical removal of half a brain to treat severe seizures;
- the ability of many "savant" subjects (such as Kim Peek and Derek Paravicini) with severe brain damage to perform astounding wonders of memory recall;
- the occurrence of very vivid and lucid human experience and human memory formation in near-death experiences occurring after the electrical shutdown of the brain following cardiac arrest;
- the complete lack of anything in the brain that can credibly explain a neural writing of complex learned information, a neural reading of such information, or the instant retrieval of learned information.
So you cannot credibly evoke Occam's Razor to defend a belief that the mind is merely a product of the brain. That principle only discourages postulating multiple causes "beyond necessity." But for the reasons above, we seem to have many a necessity for postulating some cause of the mind beyond the brain.
Another principle that could be evoked to try to justify "brains make minds" claims is the principle that every characteristic of something must be explained in terms of the internal components of that thing. Unfortunately, this principle is not a valid one, as the examples below show:
- The motion of planet Earth is not at all explained purely by some internal components of our planet. Earth's motion through the solar system is caused mainly by things outside the planet, such as the sun and the universal law of gravitation, which gives the sun a gravitational influence on Earth's motion.
- The temperature of planet Earth is not at all explained purely by some internal components of our planet. The temperature of our planet is mainly explained by an external influence: the heat that comes from the sun.
- A person's opinions and behavior are not at all explained purely by some internal components of his body. Such opinions and behavior are largely determined by factors (such as social influences) coming from beyond the person's body.
Finally, someone might try to justify "brains make minds" claims by appealing to an alleged consensus of experts, arguing that almost all neuroscientists believe that brains make minds, so the rest of us should believe it too. But appeals to consensus are a weak basis for belief. Let us consider a very interesting type of alleged consensus that I may call a "leader's new clothes" consensus. Let us imagine a small company of about 20 employees that has a weekly employee meeting every Monday morning. On one Monday morning, after all the employees have gathered in a conference room for the meeting, the company's leader comes in wearing flashy new clothes that are both very ugly and ridiculous-looking. Immediately the leader says, "I just paid $900 for this new outfit -- raise your hand if you think I look great in these clothes."
Now if it is known that the leader is someone who can get angry and fire people for slight offenses, it is quite possible that all twenty of the employees might raise their hand in such a situation, even though not one single one of them believes that the leader looks good in his ugly new clothes. In such a case the "public consensus" is 100% different from the private consensus. A secret ballot would have revealed the discrepancy.
The point of this example is that appeals to some alleged public consensus are notoriously unreliable. So arguing from some alleged consensus of some group is a weak and unreliable form of reasoning. The only way to get a reliable measure of people's opinion on something is to hold a secret ballot, and secret ballots asking scientists about their opinions on scientific matters virtually never occur. We have no idea whether the private beliefs of scientists differ very much from the public facade they present. For example, we have no idea whether it is actually true that almost all scientists think your mind is merely the product of your brain. It could easily be that 35% of them doubt such a doctrine, but speak differently in public for the sake of "fitting in," avoiding "heresy trouble" and seeming to conform to the perceived norms of their social group.
The history of science shows many "consensus beliefs" that were later discarded. Less than a century ago, eugenics was wildly popular in US colleges, but now stands in disrepute. It was once a reputed scientific consensus that homosexuality was a mental illness. Now anyone claiming that in a college would be condemned by his college superiors. To give another of the many examples I could cite, Semmelweis accumulated evidence that cases of a certain kind of deadly fever could be greatly reduced if physicians would simply wash their hands with an antiseptic solution, particularly after touching corpses. According to a wikipedia.org article on him, "Despite various publications of results where hand washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community." Thousands died unnecessarily, because of the stubbornness of experts, who were too attached to long-standing myths and cherished fantasies such as the idea that physicians had special "healing hands" that would never be the source of death. The wikipedia article tells us, "At a conference of German physicians and natural scientists, most of the speakers rejected his doctrine, including the celebrated Rudolf Virchow, who was a scientist of the highest authority of his time." Decades later, it was found that Semmelweis was correct, and his recommendations were finally adopted. The wikipedia.org article notes, "The so-called Semmelweis reflex — a metaphor for a certain type of human behavior characterized by reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms — is named after Semmelweis, whose ideas were ridiculed and rejected by his contemporaries."
More recently, in the year 2020 we were told countless times in the mainstream press that there was a scientific consensus that COVID-19 had arisen through a purely natural process, spreading to humans from animals that already carried the virus. This alleged scientific consensus held for only about a year, until 2021, when many scientists started to confess that we don't know whether COVID-19 did or did not arise from a lab leak. Below is an excerpt from a Reuters article on a US government report on COVID-19 origins:
"The ODNI report said four U.S. spy agencies and a multi-agency body have 'low confidence' that COVID-19 originated with an infected animal or a related virus. But one agency said it had 'moderate confidence' that the first human COVID-19 infection most likely was the result of a laboratory accident, probably involving experimentation or animal handling by the Wuhan Institute of Virology."
Results such as this should shake our confidence in the idea that there is something compulsory about some alleged scientific consensus. People tend to think that today's scientists have got things right because they have "state-of-the-art" equipment. Centuries from now (armed with vastly more sophisticated tools) scientists may look back on today's scientists the way today's scientists look back on 17th-century scientists, and think things like, "I can't believe way back then they were trying to figure out the mind by using those silly MRI machines." Such scientists of the future may scorn today's community of neuroscientists, regarding it as a dysfunctional culture plagued by poor practices, overconfidence and hubris.
To put things concisely, social proof is no proof, and "follow the herd" does not necessarily lead you to the truth.