Sunday, February 1, 2026

Neuroscientists Keep Peddling Explanatory Snake Oil

In the nineteenth century a widely practiced scam was the sale of snake oil. A snake oil salesman would travel from town to town, often to new towns in the western United States that had a shortage of doctors. He would often travel in a horse-drawn wagon called a medicine wagon. The snake oil salesman would make all kinds of groundless claims about the medicinal value of his worthless bottles. After a good day's sales, the snake oil salesman would be off to the next town. By keeping on the road, he would avoid the problem of customers demanding their money back because the product failed to work. Below we see a newspaper ad placed by one of the traveling snake oil salesmen:

snake oil ad

Notice the sweeping claim in the ad: that the seller could "heal all manner of disease." Typically the person making such a claim would be selling products of little or no medicinal value.

As the Wild, Wild West of the United States grew tamer and became adequately served by regular doctors, snake oil treatments fell into disrepute, and the very term "snake oil" became synonymous with cons and cheats. But it is easy to imagine how snake oil manufacturers could have caused snake oil to become very prestigious. All that would have been needed was the establishment of Departments of Snake Oil Medicine in colleges and universities, populated by Professors of Snake Oil Medicine.

Once some type of claim gets taught by some type of department in colleges and universities, the public starts thinking of the claim as respectable and well-founded. A university or college has great prestige, and is regarded as a lofty teacher of truth and a storehouse of knowledge. So when some new claim gets institutionalized by the establishment of a university or college department, people tend to think such a claim is well-established. If there were hundreds of Departments of Astrology in colleges and universities all over the country, people would tend to think that astrology is well-established, and that consulting your horoscope is a good way to judge your future.

But how could some Department of Snake Oil Medicine ever produce research that would give some veneer or aura of scientific respectability to the claims of snake oil advocates? That would be relatively easy. A variety of bad research techniques could be employed. It would work rather like this:

(1) When snake oil advocates did studies that showed no medical effectiveness in people using snake oil, such studies would simply be filed away in the file drawers of scientists, and not submitted for publication. 

(2) Snake oil advocates would run very small studies, and by pure chance a certain number of them (maybe 5% or so) would seem to show marginal effectiveness. Such studies would be the ones submitted for publication in journals (see the simulation sketch after this list).

(3) Noise-mining and cherry-picking could be heavily utilized. The case histories of thousands of snake oil users could be very carefully scanned, to look for cases in which some type of ailment (perhaps an infectious disease such as the flu) seemed to become less troubling at about the same time someone had drunk or applied snake oil. Such cases would be heavily promoted as proof of the wonderful effectiveness of snake oil. 

(4) Bad measurement techniques and poor analysis techniques could be used when evaluating someone's health, allowing a kind of see-what-you-are-hoping-to-see analysis. For example, when studying the effectiveness of snake oil in treating fevers, snake oil advocates might rely on dubious "rate how you feel on a scale of 1 to 10" survey answers, rather than much more reliable thermometer measures of a person's temperature. 

(5) Misleading visuals might be used, such as body maps showing in bright red regions of the body allegedly treatable by snake oil medicine. 
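
The arithmetic behind points (1) and (2) above is easy to check for yourself. The sketch below is a minimal simulation in Python, using invented numbers rather than data from any real trial: it runs a thousand tiny two-group studies of a treatment that does nothing at all, and counts how many of them still cross the conventional p < 0.05 significance threshold by pure chance. Roughly 5% do, and if only those "positive" studies ever leave the file drawer, the published literature looks like evidence of effectiveness.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_studies = 1000      # many small, independent trials
    n_per_group = 10      # tiny sample size in each arm
    alpha = 0.05          # conventional significance threshold

    published = 0
    for _ in range(n_studies):
        # Both groups are drawn from the same distribution: the "treatment" does nothing.
        treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            published += 1    # only the "significant" trials get submitted to journals

    print(f"{published} of {n_studies} null trials look positive "
          f"({100 * published / n_studies:.1f}%)")
    # Typically prints a figure close to 5%, from chance alone.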

By the use of such techniques and many similar poor-practice methods, Professors of Snake Oil Medicine could easily produce papers or articles that seemed to provide superficial evidence for the effectiveness of snake oil treatments, even though the treatments had no effectiveness. And if the Professors of Snake Oil Medicine were to get heavy funding (directly or indirectly) from snake oil manufacturers who made tons of money from selling snake oil, we can imagine that many university and college Departments of Snake Oil Medicine could stay well-funded. Of course, such professors would have a strong financial motive to produce results pleasing to their corporate sponsors.

corporate-funded professor

 In a previous post I stated the rule below:

The rule of well-funded and highly motivated research communities: almost any large, well-funded research community eagerly desiring to prove some particular claim can be expected to occasionally produce superficially persuasive evidence in support of such a claim, even if the claim is untrue.

Such a rule would help Professors of Snake Oil Medicine claim that they were producing studies in support of their claims. It would also help very much if such professors succeeded in enforcing taboos, and in demonizing, slandering and gaslighting all those who presented evidence against the effectiveness of snake oil. If such professors somehow made it a taboo to do research discrediting the claims of snake oil proponents, that would be a huge element in helping Departments of Snake Oil Medicine to become well-established.

Now, you may think that the imaginary tale I describe above is too hard to believe. But the truth is that something very much like what I have described above actually occurred. What happened is that colleges and universities established Departments of Neuroscience dedicated to the propagation of the belief that minds are produced by brains, and that brains are the storage place of human memories. Such claims were very much explanatory snake oil. Although countless billions of dollars have been spent trying to prove such claims, they have never been established by robust evidence. To the contrary, research on brains has produced innumerable reasons for rejecting the claims that minds are produced by brains and that brains are the storage place of human memories. Such reasons are discussed in the posts of this blog site.

So how is it that our Departments of Neuroscience have stayed in business and had such high influence? How have Professors of Neuroscience gotten so many people to believe the unbelievable dogmas they teach? This occurred because such professors used techniques just like those I described above. Some of the techniques that such professors have used are listed below:

  • "Quick and dirty" experimental designs
  • Way-too-small study group sizes
  • Cherry-picked data subsets
  • Ignoring two hundred years of well-documented psychical research presenting evidence contrary to neuroscientist dogmas
  • Unreliable claims about fear or recall in rodents, produced by bad measurement methods such as "freezing behavior" estimations 
  • Citations of poor-quality papers
  • "Keep torturing the data until it confesses" tactics
  • Constant reiterations of dogmas disproven or discredited by the facts observed by neuroscientists themselves
  • Ignoring unusual case histories that conflict with "brains make minds" claims or "brains store memories" claims
  • "Lying with colors" fMRI studies containing misleading visuals
  • Lack of pre-registration
  • Weird programmatic contortions of the data, in which investigators senselessly think they have the right to perform arbitrary convolutions on the original data and still claim their twisted output is "what the brain is telling us"
  • Professor pareidolia resembling "Jesus in my toast" claims
  • Ignoring physical shortfalls of all brains that contradict or discredit some hypothesis being tested
  • Unreliable and convoluted "spaghetti code" analysis
  • Lack of detailed blinding protocols
  • Ignoring observation failures
  • Poor reproducibility
  • HARKing 
  • Not publishing null results
  • P-hacking
  • Title or abstract claims not matching results
  • Lack of control subjects, or too few of them
  • Blending real data and artificial (fake) data, with the fake data called "simulated" as if something "simulated" were better than something fake
  • Unwarranted use of cell names and cell nicknames
  • Unwarranted assumptions of causal effects, with a massive failure to consider reasonable alternative explanations of causes
  • A failure, encouraged by constant use of the passive voice rather than the active voice, to document in most scientific papers the most basic facts needed to help police fraud, such as exactly who made an observation, the exact date when the observation was made, and where it was made.
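
To see how the "keep torturing the data until it confesses" tactic works in practice, consider the sketch below: a minimal p-hacking simulation in Python, using pure random numbers in place of any real brain-scan or behavioral data (the "regions" and "subgroups" are hypothetical). When an analyst is free to correlate a behavioral score with dozens of brain regions and several subgroups, and then report only the smallest p-value, a "significant" finding is nearly guaranteed even though the data contain nothing but noise. Pre-registration and corrections for multiple comparisons exist precisely to block this kind of search, which is why their absence appears in the list above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Pure noise standing in for a neuroimaging dataset: no real effect anywhere.
    n_subjects = 20
    scores = rng.normal(size=n_subjects)          # hypothetical behavioral scores
    signal = rng.normal(size=(n_subjects, 50))    # activity in 50 hypothetical brain "regions"

    subgroups = {
        "all": slice(None),
        "first_half": slice(0, n_subjects // 2),
        "second_half": slice(n_subjects // 2, None),
    }

    # Try every region and every subgroup, and keep only the smallest p-value.
    best_p, best_label = 1.0, None
    for region in range(signal.shape[1]):
        for name, idx in subgroups.items():
            _, p = stats.pearsonr(scores[idx], signal[idx, region])
            if p < best_p:
                best_p, best_label = p, (region, name)

    print(f'"Discovery": region {best_label[0]}, subgroup {best_label[1]}, p = {best_p:.4f}')
    # After 150 looks at pure noise, a p-value below 0.05 is almost certain.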

Today's professors of neuroscience do not get payments from snake oil manufacturers. But they often receive (directly or indirectly) very big financial benefits from pharmaceutical manufacturers and the manufacturers of biomedical devices, who may fund research or offer lucrative consulting fees to professors making claims that enhance the stock prices of such manufacturers.

bribed neuroscientist


Just as I imagined Professors of Snake Oil Medicine creating some taboo against research challenging the effectiveness of snake oil, with such professors using techniques of gaslighting and slander to marginalize researchers producing such research, professors of neuroscience have used similar techniques to try to create a taboo against research challenging the "brains make minds" dogma. We have two hundred years of published research documenting the reality of spooky mental phenomena that cannot be explained by brains: things such as ESP, clairvoyance, paranormal phenomena, apparition sightings, out-of-body experiences and near-death experiences. The professors of neuroscience have declared such research topics to be taboo, and have ignored all of the psychical research that defies their "brains make minds" dogma. Such professors have tried to gaslight, slander and marginalize respectable researchers producing results that defy the "brains make minds" dogma.

The "smoke and mirrors" world of modern neuroscience resembles the "smoke and mirrors" shenanigans that would have gone on within Departments of Snake Oil Medicine if they had ever been established.  For a look at a recent example of some of this foolishness, read the recent "Mad in America" article here.  We read of a patient who would not give "informed consent" to a treatment risking her life, who was electrically shocked until she relented, with this being hailed as "restoring decision-making capacity," in some twisted mess of neurobabble. 

smoke and mirrors neuroscience

A recent article in The Atlantic is entitled "Science Is Drowning In AI Slop." We read that soon after the popular ChatGPT AI program was introduced a few years ago, there was a huge spike in submissions to science journals and science preprint servers. We can imagine how that worked:  scientists stuffing their papers with lots of AI-generated paragraphs and AI-generated charts. We read that some researchers who would rarely submit a paper to a journal are now submitting dozens per year. We read that scientists are running machine-learning algorithms on data, claiming to have produced some interesting outcome. The article calls this "a fraud template for AI researchers," noting "as long as the outcome isn't too interesting, few people, if any, will bother to vet it." 
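
It is not hard to see how running a machine-learning algorithm on more or less arbitrary data can be dressed up as an interesting outcome. The sketch below is a hypothetical illustration in Python using scikit-learn on pure noise (the "subjects," features and labels are all invented, not taken from any submitted paper): a flexible model fit to random data looks nearly perfect when scored on its own training set, even though cross-validation shows it performs at chance level. A paper that reports only the first number fits the "fraud template" the article describes.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    # A made-up "dataset": random features and coin-flip labels, nothing to predict.
    X = rng.normal(size=(100, 200))        # 200 noise features for 100 "subjects"
    y = rng.integers(0, 2, size=100)       # random binary labels

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)

    print("Accuracy on the training data:", model.score(X, y))               # close to 1.0
    print("Cross-validated accuracy:", cross_val_score(model, X, y).mean())  # close to 0.5 (chance)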

I was for a long time a software developer, and I know how to read programming code. When I look at the programming code used for some of the low-quality "keep torturing the data until it confesses" neuroscience papers that I read, I will often see code that makes me think something along the lines of "no human would ever write junk this unreadable." My guess is that neuroscientists are sometimes using AI-generated computer code to do black-box "analytics" on brain scan data and EEG data. I suspect that often both the code and the description of the code in the paper are AI-generated, and that often the author or authors do not even understand very well what the code is doing. The obscure output from such a thing is correctly described as AI slop. But a human peer reviewer may be unlikely to catch the nonsense, partially because (according to the article) much of peer review is now done by AI rather than by humans.