Tuesday, March 29, 2022

Why the Academia Cyberspace Profit Complex Keeps Giving Misleading Brain Research Reports

Why do so many untrue and misleading stories about brains and minds appear in the press? The answer is largely a financial one: various parties profit from such misleading stories. Borrowing the famous "follow the money" line from All the President's Men (the best-known movie about the Watergate affair), let us "follow the money" and see how various parties profit from misleading stories about brains and minds in the press. 

The interesting diagram below illustrates a profit complex that links academia and cyberspace (that is, the Internet). 

[Diagram: Bad Science Is Profitable]

To understand this profit complex, you must first understand how modern scientists are judged by their peers and superiors in academia. There are two numbers by which scientists are judged: (1) the number of scientific articles that the scientist has written or co-written, called the paper count; (2) the number of other papers that have mentioned or cited one of the papers the scientist has written or co-written, called the citation count. If you are a scientist hoping for a promotion such as tenure or a higher salary, you very much want these numbers to be as high as possible. 

The desire to raise these numbers is very much a factor when a scientist designs an experiment. Given a choice between a "quick and dirty" design likely to yield a fast, positive, or important-sounding result, and a more stringent design that is longer, harder, or less likely to produce a publishable positive result, a scientist eager to raise his paper count and citation count will tend to choose the "quick and dirty" design. Such "quick and dirty" designs very often involve way-too-small sample sizes, in which fewer than 15 subjects are studied (often in studies where many dozens, hundreds or thousands of subjects would be needed to get a reliable result). A scientific study found that papers that failed to replicate received, on average, 153 more citations than papers describing research that replicated, stating this: "papers that replicate are cited 153 times less, on average, than papers that do not." Such failing-to-replicate studies typically involve shoddy "quick and dirty" experimental designs. 
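To see concretely why tiny samples are treacherous, consider the little simulation below. This is my own illustrative sketch (Python, assuming the numpy library), not something from any study mentioned here: it correlates pure noise with pure noise, and counts how often a small sample yields an impressive-looking correlation by sheer chance.

    import numpy as np

    rng = np.random.default_rng(42)

    def chance_hits(n_subjects, n_experiments=10000, threshold=0.5):
        """Fraction of pure-noise experiments whose |r| exceeds the threshold."""
        hits = 0
        for _ in range(n_experiments):
            x = rng.normal(size=n_subjects)  # e.g., some brain measurement
            y = rng.normal(size=n_subjects)  # e.g., some behavioral score
            r = np.corrcoef(x, y)[0, 1]      # correlation of noise with noise
            if abs(r) > threshold:
                hits += 1
        return hits / n_experiments

    print(chance_hits(10))    # roughly 0.14: about 1 in 7 ten-subject "studies"
                              # of pure noise finds a correlation above 0.5
    print(chance_hits(1000))  # roughly 0.0: large samples almost never do

With only ten subjects, roughly one noise-only experiment in seven produces a correlation strong enough to excite a press office; with a thousand subjects, essentially none do.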

Nowadays science journals have a tendency called "publication bias": a tendency to publish papers reporting positive results and to reject papers reporting null or negative results. When a scientist runs an experiment that produces a null or negative result and cannot get a journal to publish the paper, his paper count does not increase, and the effort does nothing to advance his career. So scientists will avoid careful, stringent designs that are less likely to yield a positive result, and will favor "quick and dirty" designs that are more likely to yield a positive result, and to yield it quickly. The quicker the experiment can be done, the more quickly the scientist's paper count can grow. 
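The sketch below (again my own illustration, in Python with numpy and scipy, using made-up numbers) shows how publication bias distorts the published record: when many underpowered studies of a weak true effect are run but only the studies that happen to reach statistical significance get published, the published effects greatly overstate the true one.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.2    # a weak true group difference (in standard-deviation units)
    n_per_group = 15     # a typical small-sample design
    published = []

    for _ in range(5000):  # 5,000 independent small studies of the same weak effect
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:   # journals accept only "significant" positive results
            published.append(treated.mean() - control.mean())

    print(np.mean(published))  # roughly 0.9: more than four times the true effect

The filter of publication bias turns a true effect of 0.2 into a published literature averaging around 0.9, which is one reason such findings later fail to replicate.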

After the design comes data collection, which produces some set of observations. Desiring to report a positive and ideally important-sounding result, scientists will tend to filter or segregate the observations to produce a subset more favorable to such a report. Sometimes this process can be described as cherry-picking; other times it is more like "keep slicing and dicing the data until it gives what is wanted" or "keep torturing the data until it confesses." There are 101 reasons that can be given for excluding some data points and keeping others, and there are hundreds of statistical methods that can be used to massage and filter data until more favorable results remain. During such analysis, scientists will have a motivation not to use blind analysis techniques, which minimize the chance of a biased analysis in which scientists report seeing what they want to see. 
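Here is a minimal sketch (my own, with invented exclusion rules) of how such slicing and dicing works: with data that is pure noise, merely trying several defensible-sounding exclusion criteria substantially raises the chance that at least one analysis crosses the conventional p < 0.05 line.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def any_significant(n=30):
        """Pure-noise data: does at least one 'defensible' exclusion rule
        yield p < 0.05 for a group difference that does not exist?"""
        group = rng.integers(0, 2, n)      # two arbitrary groups
        score = rng.normal(size=n)         # pure noise, no real effect
        age = rng.integers(18, 70, n)
        masks = [
            np.ones(n, dtype=bool),        # "all subjects"
            age >= 25,                     # "drop the very young"
            age <= 60,                     # "drop the very old"
            np.abs(score) < 2,             # "drop outliers"
        ]
        for mask in masks:
            a = score[mask & (group == 0)]
            b = score[mask & (group == 1)]
            if len(a) > 1 and len(b) > 1 and stats.ttest_ind(a, b).pvalue < 0.05:
                return True
        return False

    print(np.mean([any_significant() for _ in range(2000)]))
    # roughly 0.10 to 0.15, versus the nominal 0.05 for one pre-planned test

Just four analysis variants already double or triple the false-alarm rate, and real papers typically have far more than four such forking paths available.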

After such analysis is completed, there comes the writing of a scientific paper. When writing up a paper, scientists are strongly motivated to describe the research as showing some positive result, even if the research mainly or entirely produced a negative or null result. This is because scientists want to increase their paper count, and given publication bias (in which journals tend to reject papers reporting only negative results), a paper reporting a negative result is unlikely to be published. Scientists are also strongly motivated to report some important result: the more important the result claimed, the more likely the paper is to be published, and the more likely it is to be cited by other papers. Such citations are extremely important to scientists, since scientists are judged not just by their paper count (the number of papers they have written), but also by their citation count (the number of times such papers have been cited). 

It very often happens that in writing up papers describing their research, scientists make claims that are misleading, exaggerated or just plain false. At a blog entitled "Survival Blog for Scientists" and subtitled "How to Become a Leading Scientist," a blog that tells us  "contributors are scientists in various stages of their career," we have an explanation of why so many science papers have inaccurate titles:

"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."

[Image caption: Is this how scientists are trained?]

A neuroscientist makes this confession:

"This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers."

After a scientific paper has been written and published, it is announced with a press release issued by the main academic institution involved in the research. Nowadays the press releases of universities and colleges are notorious for making sensationalized claims that are not warranted by anything discovered in the research being described. Often a tentative claim made in a scientific paper (basically a "perhaps" or a "maybe") will be stated as if it were simply the discovery of a definite fact. Other times a university press release will make some important-sounding claim that was never made in the scientific paper writing up the research. One example: when a scientific paper appeared merely claiming that "Regional synapse gain and loss accompany memory formation in larval zebrafish," a great number of press stories repeated the headline of a press release claiming that the formation of a memory had been observed (a claim not made in the paper). We have every reason to believe that synapse gains and losses occur continually in the human body, regardless of whether some new memory is forming. 

Authorship anonymity is a large factor facilitating the appearance of misleading university and college press releases. Nowadays such press releases typically appear without any person listed as the author. So when a lie occurs (as it very often does), you can never point the finger and identify one particular person who was lying. When PR staff at universities can think "no one will blame me specifically if the press release has an error," they will feel freer to say misleading and untrue things that make unimpressive research sound important. We should hold every single scientist involved in a scientific paper responsible and accountable for every untruth that appears in a paper they co-authored, and also for every untruth that appears in the university press release announcing the paper, unless that scientist has publicly protested the misstatement. 
 
Misleading press releases produce an indirect financial benefit for the colleges and universities that issue them. Untrue announcements of important research results make the college or university sound like a place where important research is being done. The more such press releases appear, the more people will think that the institution is worth the very high tuition fees it charges. 

Judging from the quote below, it seems that science journalists often look down on the writers of university and college press releases, even though such science journalists very often uncritically parrot the claims of such people.  In an Undark.org article we read this:

"Still, there are young science journalists who say they would rather be poor than write a press release. Kristin Hugo, for example, a 26-year-old graduate of Boston University’s science journalism program, refuses to step into a communications role with an institution, nonprofit or government agency.  'I’ve been lucky enough that I haven’t had to compromise my integrity. I really believe in being non-biased and non-partisan,' she says. 'I really, really, really want to continue that. I wouldn’t necessarily begrudge someone for going into [public relations] because there’s money in that, but I’d really like to stay out of it.' "

Misleading press releases also help to sustain cyberspace profit systems outside of a college or university. Such press releases are repeated (often with further exaggerations and misstatements) by a host of web sites offering clickbait headlines leading to web pages containing ads. The more people click on these clickbait headlines, the more page views there are for pages containing ads. The more people view those pages, the more advertising revenue the web sites get. 

So web sites giving science news stories have a very large financial incentive to produce exaggerated or untrue headlines that users will be more likely to click on. If the headline on a web page truthfully says, "Another Junk Science Brain-Scanning Result," almost no one will click on it. But if the headline untruthfully says, "Breakthrough Study Reveals the Secret of Memory," then thousands of people may click on it, producing many page views of the story the link leads to, and much more advertising revenue. 

The web sites are one profit center benefiting from poor and misleading science journalism that exaggerates or misrepresents unimpressive research. Another profit center is the science journalists themselves. Most science journalists do not work on a salary basis in which they are paid the same annual amount regardless of what they write. Instead, most science journalists work on a per-article basis, earning about $1 per word for an article in a print magazine such as Discover Magazine, or about $300 for an online article. Such journalists tend to pitch their stories to editors. The more sensational the story and the more exciting its claims, the more likely it is to be published. An article that applies critical scrutiny to some impressive-sounding press release claim is unlikely to be published. By uncritically parroting unfounded but exciting-sounding claims in university and college press releases, science journalists help fatten their own wallets. Often science journalists will imaginatively add their own unwarranted claims and unjustified spin about some research, hoping to further increase their chances of getting paid for exciting-sounding news stories. In general, science journalists paid by the word or by the article are often very unreliable sources of information.  

To "follow the money" all the way, we must go back to the scientists who originally chose "quick and dirty" designs, and who may have misstated the implications and findings coming from their research. What is the result when "quick and dirty" experiment designs are chosen? The result is that the paper count (the number of published papers) of a scientist will increase more quickly. What is the result when scientists misstate or exaggerate what their observations show or imply, making their research sound important when it is not? The result is a greater number of citations of their papers by other scientists. The very important "citation count" of a scientist will increase.  What is the financial result when a scientist has piled up a high paper count and a high citation count? That scientist will be more likely to get promoted, more likely to get the tenure that gives him a lifetime job, more likely to get a higher salary, more likely to get a lucrative book deal with a major publisher, and so forth. 

What we have is an infrastructure that at every level incentivizes bad actors who mislead and misinform, so long as such persons mislead and misinform in some way that produces exciting-sounding results fitting popular academia belief systems. Given such an infrastructure, you should not be surprised to hear that today's cognitive neuroscience is a house of cards resting largely on an illusory foundation. Most of the things that cognitive neuroscientists claim to have established have not actually been established at all. Most of the more important-sounding claims made in the neuroscience news stories of recent years are claims lacking any solid foundation in observations. Junk science flourishes, because there are so many people in so many different places who profit from junk science. 

Sunday, March 20, 2022

"Thousands of Participants Are Needed for Accurate Results," But Most Brain Scan Studies Don't Even Use Dozens

For many years neuroscientists have been claiming important results about brains and minds, after doing brain imaging experiments using very small sample sizes.  For example, we may read headlines saying that some particular region of the brain is more active during some type of mental event,  and the total number of subjects who had their brains scanned will usually be smaller than 15. A new press release from the University of Minnesota Twin Cities announces results which indicate that such small-sample correlation-seeking brain imaging experiments are utterly unreliable.  The headline of the press release is "Brain studies show thousands of participants are needed for accurate results."

We read this:

"Scientists rely on brain-wide association studies to measure brain structure and function—using MRI brain scans—and link them to complex characteristics such as personality, behavior, cognition, neurological conditions and mental illness. New research published March 16, 2022 in Nature from the University of Minnesota and Washington University School of Medicine in St. Louis...shows that most published brain-wide association studies are performed with too few participants to yield reliable findings."

The abstract of the paper in the science journal Nature can be read here. The paper is entitled, "Reproducible brain-wide association studies require thousands of individuals." 

The press release tells us this:

"The study used publicly available data sets—involving a total of nearly 50,000 participants—to analyze a range of sample sizes and found:

  • Brain-wide association studies need thousands of individuals to achieve higher reproducibility. Typical brain-wide association studies enroll just a few dozen people.
  • So-called 'underpowered' studies are susceptible to uncovering strong but misleading associations by chance while missing real but weaker associations. 
  • Routinely underpowered brain-wide association studies result in a surplus of strong yet irreproducible findings."
The claim that a typical brain scanning experimental study uses "a few dozen" people is probably an overestimate. Brain imaging studies touted in the press seem to typically involve fewer than 15 subjects. 
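A standard power calculation (my own arithmetic, not the paper's) shows why the gap matters. The Fisher-z formula sketched below gives the approximate sample size needed to detect a true correlation r at the 0.05 significance level with 80% power:

    import math
    from scipy.stats import norm

    def n_needed(r, alpha=0.05, power=0.80):
        z_r = 0.5 * math.log((1 + r) / (1 - r))  # Fisher transform of r
        z_a = norm.ppf(1 - alpha / 2)            # about 1.96
        z_b = norm.ppf(power)                    # about 0.84
        return math.ceil(((z_a + z_b) / z_r) ** 2 + 3)

    print(n_needed(0.5))   # 30   -- a strong effect: a small study suffices
    print(n_needed(0.1))   # 783  -- a weak effect needs hundreds
    print(n_needed(0.05))  # 3138 -- very weak effects need thousands

Since the brain-behavior correlations at issue turn out to be very weak (as discussed below), samples of 15 or 25 subjects fall short by roughly two orders of magnitude.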

The press release tells us that the conclusions above are based on some very heavy number crunching using databases that store brain scans of a large number of people, including in many cases data on what they were doing or thinking while being scanned, what kind of mental characteristics or health history the people had, and what kind of genes the people had. The largest such database was the UK Biobank, which according to page 5 of the document here includes "resting-state functional MRI measures changes in blood oxygenation associated with intrinsic brain activity (i.e., in the absence of an explicit task or sensory stimulus)," as well as "task-functional MRI" which "uses the same measurement technique as resting-state fMRI, while the subject performs a particular task or experiences a sensory stimulus." (The task was mainly something called the Hariri faces/shapes “emotion” task.) Another large database used was a Human Connectome Project database including "task-evoked fMRI" brain scans of people while they were doing things involving working memory, gambling, language, social cognition, relational processing and emotional processing (as mentioned on page 36 of the document here). Another large database used was an Adolescent Brain Cognitive Development (ABCD) database that included fMRI scans while subjects performed tasks such as a Monetary Incentive Delay task, a Stop Signal task and an "n-back" or "nBack" task (as described here). 

In the press release we read this:

"To identify problems with brain-wide association studies, the research team began by accessing the three largest neuroimaging data sets: the Adolescent Brain Cognitive Development Study (11,874 participants), the Human Connectome Project (1,200 participants) and the UK Biobank (35,375 participants). Then, they analyzed the data sets for correlations between brain features and a range of demographic, cognitive, mental health and behavioral measures, using subsets of various sizes. Using separate subsets, they attempted to replicate any identified correlations. In total, they ran billions of analyses, supported by the MIDB Informatics Group and the powerful computing resources of the Minnesota Supercomputing InstituteThe researchers found that brain-behavior correlations identified using a sample size of 25—the median sample size in published papers—usually failed to replicate in a separate sample.  As the sample size grew into the thousands, correlations became more likely to be reproduced. Robust reproducibility is critical for today’s clinical research. Senior author Nico Dosenbach, MD, PhD, an associate professor of neurology at Washington University, says the findings reflect a systemic, structural problem with studies that are designed to find correlations between two complex things, such as the brain and behavior."

What this study very strongly indicates is that the vast majority of brain imaging studies trying to correlate brains and mental states or mental activity have misled us by producing false alarms. The study indicates that such brain imaging studies have not merely been guilty of some slight shortfall, but have been guilty of a hundred-fold shortfall (the difference between about 25 and "thousands" being a difference of about a hundred times). It's as bad as if someone told you he produced a score of 1000 on his SAT test, but really only produced a score of 10. 

The study described above was led by neuroscientist Scott Marek. An article on the study in the journal Nature says this:

“ 'There’s a lot of investigators who have committed their careers to doing the kind of science that this paper says is basically junk,' says Russell Poldrack, a cognitive neuroscientist at Stanford University in California, who was one of the paper’s peer reviewers. 'It really forces a rethink.' ”

The New Scientist article on the Marek study is behind a paywall, but at least I can show its headline:

[Image: New Scientist headline critiquing brain scanning studies]

For many years we have been scammed and the US federal government has been scammed by neuroscientists doing ridiculously low-powered brain imaging studies looking for correlations between brains and minds.  For many years our experimental neuroscientists doing small-sample brain imaging studies (looking for correlations between brain states and mental states) have been playing a game of "sham, scam, thank you Sam," the Sam being Uncle Sam who provided the dollars for such worthless studies producing only false alarms. This is a racket, but since it is a nice little source of dishonest income and easy work for professors, the racket will probably long continue. 

The US government seems to be incredibly poor at recognizing bad performance by biology authorities. In the New York Times there was recently an opinion article with the headline "How Millions of Lives Might Have Been Saved from COVID-19." Without naming any names of the bumbling officials guilty of the bungled US response to COVID-19, we get some startling comparisons between competent responses in small countries and incompetent responses in the US. For example, we are told that Taiwan has suffered only 853 COVID-19 deaths, and that "if the United States had suffered a similar death rate, we would have lost about 12,000 people, instead of nearly a million." Because the US government seems to be extremely poor at recognizing bad performance by biology authorities, we will probably continue to see the "sham, scam, thank you Sam" researchers bilking the government by doing worthless federally-funded small-sample brain imaging studies producing only false alarms. 


One of the quotes above tells us that correlations reported with a sample size of 25 "usually failed to replicate in a separate sample," but that "as the sample size grew into the thousands, correlations became more likely to be reproduced." Does this mean that strong correlations were found between brains and cognitive activity or cognitive states as long as you used samples of thousands? No. The Nature article on the Marek study tells us this:

"Researchers measure correlation strength using a metric called r, for which a value of 1 means a perfect correlation and 0 none at all. The strongest reliable correlations Marek and Dosenbach’s team found had an r of 0.16, and the median was 0.01."  

So even when data on thousands of subjects was used, no strong or medium correlations were found, and the median correlation was a negligible 0.01. By common conventions, a medium-strength correlation has an r of about .3 to .5, and a strong correlation has an r of about .7 or higher. The results discussed above are consistent with the idea that the brain is not the source of the human mind, and is not the storage place of human memories. Under such an idea, we would expect there to be no strong correlations between brain states and unemotional mental activity such as calm thinking or calm recall. 
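To put those numbers in perspective, the square of a correlation gives the fraction of variance in one measure that is accounted for by the other (the arithmetic below is mine, not the paper's):

    r = 0.16  -->  r^2 = 0.0256   (about 2.6% of the variance)
    r = 0.01  -->  r^2 = 0.0001   (about 0.01% of the variance)

So even the strongest reliable brain-behavior correlation the team found accounts for under 3 percent of the variance in the behavioral measure, and the median correlation accounts for essentially none of it.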

Monday, March 14, 2022

When They Get Data Suggesting Brains Don't Make Minds, They Repackage It As "Brains Make Minds"

The "brains make minds" dogma is so entrenched in academia that many scientists feel afraid to challenge it, on the grounds that becoming a heretic is not a good career move.  What often happens is that scientists will get some observational result that is inconsistent with the dogma that brains make minds, and such scientists will try to repackage this result as a "brains make minds" result.  Examples of this can be found in the discussion of humans who think very well and have good intelligence despite having lost half, most or almost all of their brains because of disease or surgery to stop severe seizures. Rather than listening to what nature is suggesting by such cases (that the brain is not the source of the mind), our scientists may try to repackage such results as something like "evidence of the amazing plasticity of the brain, which can work well even when most of it has been lost."  Similarly, if someone claims your teeth produce your mind, and you lose most of your teeth, he may say, "Well, isn't that amazing: it requires just a few teeth for you to be smart!" 

In today's science news, we have an example of such repackaging of results to fit the standard narrative (even when the results suggest that narrative is wrong). It is a news story entitled "Surprise! Complex Decision Making Found in Predatory Worms With Just 302 Neurons."  No evidence has been produced that such decision-making occurs through neurons. We read, "Instead of looking at actual neurons and cell connections for signs of decision making, the team looked at the behavior of P. pacificus instead – specifically, how it chose to use its biting capabilities when confronted with different types of threat."  We read about the worms taking "two different strategies" when biting, one involving "biting to devour" and the other involving "biting to deter."  We read this:

"By observing where P. pacificus worms laid their eggs, and how their behavior changed when a bacterial food source was nearby, the scientists determined that bites on adult C. elegans were intended to drive them away – in other words, they weren't simply failed attempts to kill these competitors. While we're used to such decision making from vertebrates, it hasn't previously been clear that worms had the brainpower to proverbially weigh up the pros, cons, and consequences of particular actions in this way."

If we knew such worms produced such "complex decision making" by the action of neurons, would we then be entitled to say, "Complex decision making can arise from only 302 neurons"?  No, not at all. Very many or most of the neurons of any organism are presumably dedicated to things such as muscle movement, sensory perception and autonomic function. We should presume that 90% of the neurons in such worms are tied up in such things. If you then wanted to claim that complex decision making came from the neurons of such worms, you would have to presume that a mere 30 or so neurons were producing such complex decisions. 

Such a claim would be laughable. Humans have no understanding of how billions of neurons in a human brain could produce any such thing as thinking, understanding or decision making. To claim that complex decision making can come from only a very small number of neurons in a worm seems absurd, rather like thinking that someone with only a few dozen muscle cells could lift an air conditioner up above his head.

The writer of today's news story should have recognized that these results conflict with claims that minds are produced by brains. Instead, the results were repackaged to conform to the "brains make minds" dogma. So the beginning of the news story reads like this:

"As scientists continue to discover more about the brain and how it works, it can help to know just how much brain matter is required to perform certain functions – and to be able to make complex decisions, it turns out just 302 neurons may be required."

See here for another example of complex thought from tiny animals (ravens). An article in Knowable Magazine suggests that tiny spiders are capable of complex thought. We read this:

"There is this general idea that probably spiders are too small, that you need some kind of a critical mass of brain tissue to be able to perform complex behaviors,' says arachnologist and evolutionary biologist Dimitar Dimitrov of the University Museum of Bergen in Norway. 'But I think spiders are one case where this general idea is challenged. Some small things are actually capable of doing very complex stuff.'  Behaviors that can be described as 'cognitive,' as opposed to automatic responses, could be fairly common among spiders, says Dimitrov, coauthor of a study on spider diversity published in the 2021 Annual Review of Entomology."

In one test of intelligence, tiny mouse lemurs with brains 1/200 the size of chimpanzee brains did about as well as the chimpanzees. We read this:

"The results of the new study show that despite their smaller brains lemurs' average cognitive performance in the tests of the PCTB was not fundamentally different from the performances of the other primate species. This is even true for mouse lemurs, which have brains about 200 times smaller than those of chimpanzees and orangutans."

This result is what we might expect under the hypothesis that brains do not make minds, but not at all what we would expect under the claim that brains make minds. 


Tuesday, March 8, 2022

US Government Gives Us Fake News About Brains and Memory

Courtesy of a sub-branch of the United States government, we have in today's science news an utterly bogus headline as phony as a three-dollar bill.  The headline is "Researchers uncover how the human brain separates, stores, and retrieves memories." The headline appears in a press release published by the National Institute of Neurological Disorders and Stroke, a branch of the National Institutes of Health (NIH), a branch of the US government.  

Scientists have no actual understanding of how memories form or how a human being is able to retrieve a memory. They have never been able to discover any credible coding mechanism or translation mechanism by which any of the main forms of human memories could be translated into neural states or synapse states. Computers have read-write heads to store information in particular places on a disk. The brain has nothing like a write component that could be used to store information in some particular part of the brain, and has nothing like a read component that could be used to read information from some particular part of the brain. Computers have indexing systems and addressing systems that allow the instant retrieval of stored information. No such thing exists in the brain, which has no indexing system, no addressing, no coordinate system and no position notation system. So the instant recall of a memory (given a single word or phrase) would seem to be impossible if such recall occurs by the reading of neurons or synapses. As discussed here, the extremely abundant levels of noise in the brain should make impossible both the accurate storage of learned information in the brain and the accurate retrieval of learned information from the brain. And the many typically-overlooked slowing factors in the brain (such as synaptic delays) should make it impossible for a brain to be responsible for memory retrieval that occurs instantly. Given the very short lifetimes of synaptic proteins (1000 times shorter than the longest length of time humans can remember things), and the high turnover of dendritic spines, no one has been able to come up with a credible theory of how brains could store memories that last for 50 years. Nor has anyone been able to explain how the sluggish chemical operations in a brain could instantly form a memory, something humans routinely do. Learned memory information has never been discovered by examining any type of neural tissue. For example, not one single bit of a person's memory can be retrieved from a corpse or from tissue extracted during brain surgery. 
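To make the contrast concrete, here is a toy illustration (mine, purely illustrative) of the kind of indexed retrieval a computer performs. A hash table maps a cue directly to a storage address, so retrieval takes a single step no matter how much is stored; without such an index, retrieval requires scanning the stored records one by one. Nothing resembling either the index or the scanner has been identified in the brain.

    # A hash table: the cue is converted directly into a storage address.
    memories = {}
    memories["Waterloo"] = "Napoleon was defeated there in 1815."

    # Indexed retrieval: one step, however large the store grows.
    print(memories["Waterloo"])

    # Unindexed retrieval: every stored record must be examined in turn.
    def unindexed_recall(cue, records):
        for stored_cue, record in records:
            if stored_cue == cue:
                return record
        return None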

The study in question ("Neurons detect cognitive boundaries to structure episodic memories in humans") involved 20 epilepsy patients who had electrodes implanted in their heads, presumably for medical reasons such as determining the source of their seizures. The patients were shown some videos, and electrode readings were taken of electrical signals from their brains. In the press release we read the following:

"The researchers recorded the brain activity of participants as they watched the videos, and they noticed two distinct groups of cells that responded to different types of boundaries by increasing their activity. One group, called 'boundary cells' became more active in response to either a soft or hard boundary. A second group, referred to as 'event cells' responded only to hard boundaries. This led to the theory that the creation of a new memory occurs when there is a peak in the activity of both boundary and event cells, which is something that only occurs following a hard boundary."

I do not have access to the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper, which is behind a paywall. But you can read for free the preprint of an identical-sounding paper by the same lead author (Jie Zheng) involving the same 20 epilepsy patients, the same claims, the same brain region (the medial temporal lobe), and the same experimental method involving taking electrode readings of brain signals while patients were watching videos.  That preprint ("Cognitive boundary signals in the human medial temporal lobe shape episodic memory representation") is not very impressive. 

The extremely dubious method followed was to arbitrarily select hundreds of neurons for study, and to look for some tiny subset of neurons whose electrical activity could be correlated (merely in some fraction-of-a-second blip way) with memory activity of the human subjects when "boundary conditions" of videos were shown, giving the nickname "boundary cells" or "event cells" to such neurons. The fraction of such "boundary cell" neurons found was reportedly 7%. The first giant problem is that, given the many billions of neurons in the human brain, there is no reason to think that the arbitrarily selected set of hundreds of neurons had any involvement at all in the storage or retrieval of a human memory. In fact, there is a very strong reason for thinking that such neurons almost certainly had no involvement at all in the storage or retrieval of a human memory: a few hundred is a very tiny fraction of many billions. 

The second giant problem is that there is every reason to suspect that the small percentage of supposedly correlated neurons found (reportedly 7%) is just what we would expect to find by chance, when examining neurons with random electrical signals having nothing to do with memory. The authors claim that chance would have produced a result of only 2% rather than 7%. But since the paper did not involve any blinding protocol (such as should have been used for a study like this to be worthy of our attention), we should not be impressed by such a difference. We do not know whether the 7% is an over-estimate produced by scientists seeing what they wanted to see, nor whether the 2% is an under-estimate chosen so that a desired result could be reported; a blinding protocol would have reduced analytic bias in both directions.  

A similar state of affairs holds in regard to the reported detection of cells called "event cells." The authors claim to have found that 6% of the hundreds of studied cells had some fraction-of-a-second correlation characteristic allowing them to be classified as "event cells," and they claim that only 2% of cells would have such characteristics by chance. But since the authors failed to follow any blinding protocol, we cannot have confidence in either of these numbers.  
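The worry about chance classification can be made concrete with a small simulation (my own sketch with invented firing-rate numbers, not the paper's method). Screen hundreds of simulated neurons whose firing is pure noise, and a test at the conventional 5% level will reliably "discover" a crop of apparently responsive cells:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_neurons, n_boundaries = 600, 40

    flagged = 0
    for _ in range(n_neurons):
        # Firing rates around "boundary" moments versus matched control
        # moments, drawn from the same distribution: this simulated
        # neuron pays no attention whatever to boundaries.
        at_boundary = rng.normal(10, 2, n_boundaries)
        control = rng.normal(10, 2, n_boundaries)
        if stats.ttest_ind(at_boundary, control).pvalue < 0.05:
            flagged += 1

    print(flagged / n_neurons)  # about 0.05: chance alone yields "boundary cells"

Whether the true chance rate in a given analysis is 2%, 5% or 7% depends entirely on the thresholds and analysis choices made, which is why an unblinded analysis of this kind deserves little confidence.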

Under the very unlikely scenario that some meaningful difference in neuron response has been detected here, there is no particular reason to think that it is some neural sign of memory formation or memory retrieval. There are any number of reasons why brain cells might respond differently while videos are being shown, most of which have nothing to do with learning or memory.  For example, a different visual stimulus can produce a different neural response, as can a different muscle movement or a fleeting emotion.  We are told that the "boundary conditions" in the watched videos (supposedly producing different responses in the so-called "boundary cells") were accompanied by "sharp visual input changes." So any difference in neural response might have been merely a difference related to different visual perceptions, not something having to do with memory. 

In short, no robust evidence has been provided in this preprint that any cells were involved in memory formation or memory retrieval, and since the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper by the same lead author seemed to be identical in all the main features, there is no reason to think that such a study provided any evidence for a brain involvement in  memory formation or memory retrieval. 

Here is an excerpt from the press release touting the "Neurons detect cognitive boundaries to structure episodic memories in humans" paper, one that uses a faulty line of reasoning:

"The researchers next looked at memory retrieval and how this process relates to the firing of boundary and event cells. They theorized that the brain uses boundary peaks as markers for 'skimming' over past memories, much in the way the key photos are used to identify events. When the brain finds a firing pattern that looks familiar, it 'opens' that event.

Two different memory tests designed to study this theory were used. In the first, the participants were shown a series of still images and were asked whether they were from a scene in the film clips they just watched. Study participants were more likely to remember images that occurred soon after a hard or soft boundary, which is when a new 'photo' or 'event' would have been created.

The second test involved showing pairs of images taken from film clips that they had just watched. The participants were then asked which of the two images had appeared first. It turned out that they had a much harder time choosing the correct image if the two occurred on different sides of a hard boundary, possibly because they had been placed in different 'events.'

These findings provide a look into how the human brain creates, stores, and accesses memories." 

There is no justification for claiming that the experiments discussed in the quote above tell us anything about the brain. The experiments discussed in the quote above are psychology experiments involving only human mental performance, without any measurement of the brain.  What we see here is a trick that materialists frequently use:  use some experimental results that do not involve any brain reading or brain scanning or brain measurement, and then claim that such results tell you something about the brain. When experimental results merely tell us that humans perform in such-and-such a way, or merely tell us that minds perform in such-and-such a way, we have no warrant for saying that such results tell us that the brain is performing in such-and-such a way.

Not one single bit of robust evidence has been provided in the press release that any understanding has occurred as to how a brain could store or retrieve a memory, nor has any robust evidence been provided for the claim that brains store or retrieve memories.  All of the old reasons for rejecting such claims remain as strong as ever. 

In today's NIH press release we have an extremely untrue statement saying, "This work is transformative in how the researchers studied the way the human brain thinks." No, the study described is just another example of a dubious neuroscience research design of a kind I have seen countless times before. The study was funded by the NIH's BRAIN Initiative, and the PR people of that project have often groundlessly used the word "transformative" for meager research results before. I quote from a previous post of mine discussing the lack of major progress made by the BRAIN Initiative:

"So far the BRAIN Initiative has been running for four or five years, and has accomplished nothing extremely noteworthy. Our understanding of the brain has not dramatically advanced during those four or five years, and all the old mysteries of mind and memory seem as mysterious as ever. At this 'Achievements' link there is a discussion of what the BRAIN Initiative has accomplished so far. At the top of the text is a big bold headline saying 'Transformative Advances,' but the BRAIN Initiative has produced no such transformative advances. Go beyond the flashy spin on the web site, the high-tech glitter, and the discussion of things in progress that haven't yet yielded much, and you have not a single major accomplishment relating to our understanding of the mind or memory. You see in this section a video entitled 'The BRAIN Initiative – the First Five Years.' The video fails to list a single accomplishment of the BRAIN Initiative. Apparently all this work to mechanistically explain the mind is pretty much a flop and a failure so far."

Below is an extremely relevant quote from the well-worth-reading paper "A Call for Greater Modesty in Psychology and Cognitive Neuroscience":

"A romantic view holds that science is built on different values, such as integrity and honesty, as well as different systems of operation that mandate a dispassionate, calculated and systematic pursuit of the 'truth'. However, such a view of science is naïve. The incentive structure of modern science is such that a 'simplify, then exaggerate' strategy has become dominant, even if only tacitly. To get published in leading journals, to be awarded grants and to be hired as a postdoc or faculty member, a system-wide bias for novelty, exaggeration and storytelling has emerged (Huber et al., 2019; Nosek et al., 2012). The prizing of novelty over quality represents one overarching driver in the construction of a research culture beset by the widespread use of questionable research practices and low levels of reproducibility (Chambers, 2017; Munafò et al., 2017; Nelson et al., 2018; Open Science Collaboration, 2015; Simmons et al., 2011). Indeed, although there have arguably been recent successes (Shiffrin et al., 2018), many aspects of modern psychology and brain science resemble a creative writing class as much as a systematic science of brain or mind."

Wednesday, March 2, 2022

No Solid Principle Justifies "Brains Make Minds" Thinking

In the posts on this blog, I have shown that the facts do not justify conventional claims that the brain is the source of the human mind, and claims that memories are stored in brains.  But could there be some kind of general principle that justifies thinking that brains make minds? Let's look at some possible principles, and see how well they stand up to scrutiny. 

One possible principle that could be invoked to try to justify "brains make minds" claims is the principle that physical effects must be explained by physical causes. But this is not a defensible principle for justifying "brains make minds" thinking. For one thing, mental effects such as thinking and understanding are not physical effects. Secondly, it would seem that many physical effects are not produced by physical causes, but are instead produced by mental causes. If John becomes enraged at Joe and punches Joe, that is not a physical cause producing a physical effect, but a mental cause (rage) producing a physical effect. 

Another possible principle that could be invoked to try to justify "brains make minds" claims is the principle that mental effects must be explained by physical causes. But this is not a defensible principle for justifying "brains make minds" thinking. Consider this case: John becomes very sad because his true love Mary has become very sad. This would seem to be a case of a mental effect being produced by a mental cause, and countless other examples of such a thing could be given. It does not seem to be true that mental effects must always be explained by physical causes. 

Another possible principle that could be invoked to try to justify "brains make minds" claims is the principle that scientists must never explain things by imagining invisible causes. A person could invoke this principle, and then say, "So rather than invoking some invisible cause for things mental, we must think of a visible cause: the brain." But this is not a defensible principle for justifying "brains make minds" thinking. The fact is that outside the world of neuroscience, scientists often invoke invisible causes to explain things.  

To explain the movements of bodies in the solar system, scientists invoke a universal law of gravitation. Gravitation is very much an invisible cause. You can observe someone falling under gravity, but the force of gravitation is itself invisible. To give another example, cosmologists (scientists who study the universe as a whole) habitually invoke two never-observed invisible things as explanations: dark matter and dark energy. Such invisible and never-observed things are pillars in the explanation systems of cosmologists. So it simply isn't true that scientists must never explain things by imagining invisible causes. If neuroscientists were to stop telling us that our brains make our minds, and were to start teaching that our minds arise from some mysterious "mind source" external to our bodies, this would be nothing very different from what cosmologists have been doing for decades, by appealing to invisible, never-measured dark matter and dark energy. 

Another possible principle that could be invoked to try to justify "brains make minds" claims is the long-standing principle of Occam's Razor. This was originally stated as the principle that "entities should not be multiplied beyond necessity." One could appeal to Occam's Razor when trying to justify a belief that brains make minds. The reasoning might go like this:

"If we imagine that a brain is the cause of all of the mind and the storage spot of memory, that is simpler than imagining some soul is involved. For if you imagine a soul, you must also imagine some soul-giver or a soul source; and then you are postulating two things, not just one (a brain). But it is better to avoid postulating multiple  things if you can postulate only one thing. That's the long-standing principle of Occam's Razor." 

This argument is fallacious because it misstates Occam's Razor. As the wikipedia.org article on Occam's Razor notes, the principle is often inaccurately paraphrased as "the simplest explanation is usually the best one." It is not a valid principle that we should always prefer the simpler or simplest explanation. For example, imagining atoms as hard indivisible particles, as some ancient thinkers did, is simpler than imagining atoms as structures of multiple electrons, protons and neutrons. But in this case the more complicated explanation postulating more things is the correct one.

Occam's Razor is the principle that "entities should not be multiplied beyond necessity," and that "beyond necessity" part is crucial. Occam's Razor is the principle that we should not assume additional causal factors unless we need to do so. Below are some examples of correct and incorrect applications of Occam's Razor:

(1) A man was shot in the back when a rifle bullet tore into his flesh. Should we assume that two people pulled the trigger, or only one? You don't need two people to pull a trigger. So according to Occam's Razor, we should assume only one person pulled the trigger. 

(2) A man was killed when he was simultaneously shot in the back and also struck by an arrow that hit him in the front. We cannot evoke Occam's Razor to say there was only a single killer.  Here there is a necessity for postulating multiple causes. So it is quite consistent with Occam's Razor for us to assume there were two killers, one shooting from the front, and another shooting from the back. 

In the case of the mind and the brain, there are multiple necessities for assuming that the mind arises from something beyond the brain. They include:

  • the very short lifetime of proteins in the brain (about 1000 times shorter than the longest length of time old people can remember things);
  • the rapid turnover and high instability of dendritic spines;
  • the failure of scientists to ever find the slightest bit of stored memory information when examining neural tissue;
  • the existence of good and sometimes above-average intelligence in some people whose brains had been almost entirely replaced by watery fluid (such as the hydrocephalus patients of John Lorber);
  • the lack of any indexing system or coordinate system or position notation system in the brain that might help to explain the wonder of instant memory recall;
  • the good persistence of learned memories after surgical removal of half a brain to treat severe seizures;
  • the ability of many "savant" subjects (such as Kim Peek and Derek Paravicini) with severe brain damage to perform astounding wonders of memory recall;
  • the fact of very vivid and lucid human experience and human memory formation in near-death experiences occurring after the electrical shutdown of the brain following cardiac arrest;
  • the complete lack of anything in the brain that can credibly explain a neural writing of complex learned information, a neural reading of such information, or a neural instant retrieval of learned information.

So you cannot credibly invoke Occam's Razor to defend a belief that the mind is merely a product of the brain. The principle only discourages postulating multiple causes "beyond necessity." But for the reasons above we seem to have many a necessity for postulating some cause of the mind beyond the brain.  

Another principle that could be invoked to try to justify "brains make minds" claims is the principle that every characteristic of a thing must be explained in terms of that thing's internal components. Unfortunately, this principle is not a valid one, as the examples below show:

  • The motion behavior of planet Earth is not at all explained purely by some internal components of our planet.  The motion behavior of planet Earth through the solar system is caused mainly by things outside of planet Earth, such as the sun and the universal law of gravitation which causes the sun to have a gravitational influence on the motion of Earth. 
  • The temperature of planet Earth is not at all explained purely by some internal components of our planet. The temperature of our planet is mainly explained by an external influence: the heat that comes from the sun. 
  • A person's opinions and behavior are not at all explained purely by some internal components of his body. Such opinions and behavior are largely determined by factors (such as social influences) coming from beyond the person's body. 
It is simply not true that scientists always explain something purely by discussing its internal parts. Scientists frequently maintain that the main explanation for something's characteristics is a set of causal factors outside of that thing. 

Another principle that could be invoked to try to justify "brains make minds" claims is a "follow the consensus" principle. It could be argued that there is a scientific consensus that memories are stored in brains, and that the mind is merely the product of the brain, so we should believe that. But there are problems with this argument.

"Consensus" is one of the most abused words in scientific discourse. Very confusingly defined in multiple ways, "consensus" is a word that some leading dictionaries define as an agreed opinion among a group of people. The first definition of "consensus" by the Merriam-Webster dictionary is "general agreement: unanimity."  We have no proof that there is any actual consensus among scientists that brains make minds or that brains store memories.  To the contrary, there are signs of serious doubts about such a claim.  In science literature these days it is often said that the problem of consciousness is an unsolved problem.  Elsewhere we read scientists flirting with panpsychism, an explanation for consciousness different from the idea that your brain produces consciousness.  

Let us consider a very interesting type of alleged consensus that I may call a "leader's new clothes" consensus.  Let us imagine a small company of about 20 employees that has a weekly employee meeting every Monday morning. On one Monday morning after all the employees have gathered in a conference room for the meeting, the company's leader comes in wearing flashy new clothes that are both very ugly and ridiculous-looking. Immediately the leader says, "I just paid $900 for this new outfit -- raise your hand if you think I look great in these clothes."   

Now if it is known that the leader is someone who can get angry and fire people for slight offenses, it is quite possible that all twenty of the employees might raise their hand in such a situation, even though not one single one of them believes that the leader looks good in his ugly new clothes.  In such a case the "public consensus" is 100% different from the private consensus. A secret ballot would have revealed the discrepancy. 

The point of this example is that appeals to some alleged public consensus are notoriously unreliable. Arguing from some alleged consensus of a group is a weak and unreliable form of reasoning. The only way to get a reliable measure of people's opinion on something is a secret ballot, and secret ballots of scientists asking about their opinions on scientific matters virtually never occur. We have no idea whether the private beliefs of scientists differ very much from the public facade they present. For example, we have no idea whether it is actually true that almost all scientists think your mind is merely the product of your brain. It could easily be that 35% of them doubt such a doctrine, but speak differently in public for the sake of "fitting in," avoiding "heresy trouble" and conforming to the perceived norms of their social group. 

The history of science shows many "consensus beliefs" that were later discarded. Less than a century ago, eugenics was once wildly popular in US colleges, but now stands in disrepute. It was once a reputed scientific consensus that homosexuality was a mental illness. Now anyone claiming that in a college would be condemned by his college superiors. To give another of many other examples I could cite, Semmelweis accumulated evidence that cases of a certain kind of deadly fever could be greatly reduced if physicians would simply wash their hands with an antiseptic solution, particularly after touching corpses. According to a wikipedia.org article on him, "Despite various publications of results where hand washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community." Thousands died unnecessarily, because of the stubbornness of experts, who were too attached to long-standing myths and cherished fantasies such as the idea that physicians had special "healing hands" that would never be the source of death. The wikipedia article tells us, "At a conference of German physicians and natural scientists, most of the speakers rejected his doctrine, including the celebrated Rudolf Virchow, who was a scientist of the highest authority of his time."  Decades later, it was found that Semmelweis was correct, and his recommendations were finally adopted.   The wikipedia.org article notes, "The so-called Semmelweis reflex — a metaphor for a certain type of human behavior characterized by reflex-like rejection of new knowledge because it contradicts entrenched norms, beliefs, or paradigms — is named after Semmelweis, whose ideas were ridiculed and rejected by his contemporaries." 

More recently, in the year 2020 we were told countless times in the mainstream press that there was a scientific consensus that COVID-19 had arisen through a purely natural process, spreading from some animals that had the virus before humans. This alleged scientific consensus held for only about a year, until 2021, when many scientists started to confess that we don't know whether COVID-19 did or did not arise from a lab leak.  Below is from a Reuters article on a US government report on COVID-19 origins:

"The ODNI report said four U.S. spy agencies and a multi-agency body have 'low confidence' that COVID-19 originated with an infected animal or a related virus. But one agency said it had 'moderate confidence' that the first human COVID-19 infection most likely was the result of a laboratory accident, probably involving experimentation or animal handling by the Wuhan Institute of Virology."

Results such as this should shake our confidence in the idea that there is something compulsory about some alleged scientific consensus.  People tend to think that today's scientists have got things right because they have "state-of-the-art" equipment. Centuries from now (armed with vastly more sophisticated tools) scientists may look back on today's scientists the way today's scientists look back on 17th-century scientists, and think things like, "I can't believe way back then they were trying to figure out the mind by using those silly MRI machines."  Such scientists of the future may scorn today's community of neuroscientists, regarding it as a dysfunctional culture plagued by poor practices, overconfidence and hubris. 

To put things concisely, social proof is no proof, and "follow the herd" does not necessarily lead you to the truth. 



Another principle that could be invoked to try to justify "brains make minds" claims is the principle that scientists must only believe in things natural, so we cannot believe in something supernatural (such as a soul that could explain the human mind). This principle is far from self-evident. Given sufficient evidence for the supernatural, it would seem that scientists should believe in it, because their supreme rule should be "follow the evidence wherever it leads," not "only believe in things you think are natural." Secondly, believing in some non-neural cause of the human mind does not necessarily require a belief in the supernatural. Humans could get something like a soul by means of some mysterious cosmic infrastructure that in some sense operates "naturally," rather than by one-by-one miraculous dispensations. So believing in a non-neural source of the human mind does not necessarily require believing in something miraculous or supernatural. 

In short, there is no sound general principle that can be invoked to justify thinking that the human mind is mainly a product of the brain, and that the brain is the storage place of memories.