Sunday, July 30, 2023

"Interbrain Synchrony" Smells Like Neuroscientist Pareidolia

In the year 2003 the United States launched an unprovoked invasion of Iraq. The US tried to get the United Nations Security Council to approve its invasion, but failed. Although Iraq was ruled by a very bad man (Saddam Hussein), there was no justification in international law for the invasion. Far from threatening the United States at the time the invasion was launched, Iraq was allowing weapons inspectors a free run of the country in the months before the invasion. The US claimed that the invasion was justified because Iraq had weapons of mass destruction. No such weapons showed up in searches of the country following its invasion and occupation by the United States. 

To help get compliant journalists who would write favorable stories, the US military used a technique known as embedding journalists. An embedded journalist was one who would live for an extended period with, and follow, a particular military unit that entered Iraq. This would encourage a kind of bonding and brotherhood most likely to result in favorable stories with titles such as "The Heroes That I Rode With Into Baghdad" rather than stories questioning the United States' unprovoked invasion of Iraq. Have a man wear the uniform of the 1st Infantry Division and have him drive around with other soldiers in an armored personnel carrier of the 1st Infantry Division, and you will be likely to get someone reporting favorably on the activities of the 1st Infantry Division, particularly if such a division does its nastiest work when embedded journalists are not around. 

In the world of science journalism, we sometimes see the "embedded journalist" technique being used. To help get favorable news accounts of extremely dubious and poorly designed brain scan experiments, some neuroscientists seem to be encouraging or allowing science journalists to participate in such experiments, giving them experiences just like the subjects of such experiments.  So we have articles such as the recent Scientific American article "Brain Waves Synchronize When People Interact." The piece is an example of a science journalist writing a long article that fails to critically examine extremely weak and poorly supported claims.  Giving us no references to back up such claims, the author states this:

"An early, consistent finding is that when people converse or share an experience, their brain waves synchronize. Neurons in corresponding locations of the different brains fire at the same time, creating matching patterns, like dancers moving together."

There is no robust evidence for such claims. Neurons fire almost constantly in all parts of the brain. So anyone looking for neurons firing at the same time in the corresponding parts of two different brains will always be able to find neurons in both parts firing at the same time. There is no good evidence for any synchronization resembling dancers moving together to perform the same choreography. 

The science journalist writing the story has acted as an "embedded journalist." She has undergone a medically unnecessary hour-long brain scan as part of her story about other people undergoing medically unnecessary brain scans as part of a science experiment.  She tells us that a neuroscientist "had enlisted" her in this "pioneering study." Neuroscientists have actually been doing similar studies for a long time. We hear a neuroscientist named Wheatley make this untrue, nonsensical claim:

"When we're talking to each other, we kind of create a single uberbrain that isn't reducible to the sum of its parts. Like oxygen and hydrogen combine to make water, it creates something special that isn't reducible to oxygen and hydrogen independently." 

No, when I talk to some stranger, that does not create some single brain, and it isn't anything whatsoever like oxygen and hydrogen combining to make water. Our science journalist lets this very silly statement get by without questioning it. She states the following about the aftermath of her brain scan:

"The researchers will compare the activity in my and Sid's brains, and the brains of all the other pairs in the study, second by second, voxel by voxel, over the course of our storytelling session, looking for signs of coherence..Such studies take time, but in a year or so, if all goes according to plan, they will publish their first results."

Gee, that sounds very much like the scientists will be eagerly seeking correlations, spending a whole lot of time slicing and dicing data, until some little bit of "coherence" can be squeezed out somewhere. It sounds like what would be going on if scientists took 10,000 photos of clouds in New York and Shanghai, and then eagerly studied them, trying to find some correlation between the clouds at the same time over New York and Shanghai.   
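The point can be illustrated with a small simulation (my own hypothetical sketch, not a reanalysis of any actual study): generate two streams of pure noise, then search across many short windows for the strongest correlation between them. The search itself reliably manufactures some "coherence":

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two completely independent "signals" -- stand-ins for clouds over New York
# and clouds over Shanghai, or for two scanned brains with no real coupling.
a = [random.gauss(0, 1) for _ in range(2000)]
b = [random.gauss(0, 1) for _ in range(2000)]

# Slice the recordings into many short windows and keep the best-looking one,
# the way a flexible "second by second, voxel by voxel" search can.
best = max(abs(pearson(a[i:i + 50], b[i:i + 50])) for i in range(0, 2000, 50))
print(f"strongest 'coherence' found in pure noise: r = {best:.2f}")
```

Nothing here is correlated by construction, yet the best window always shows an apparent relationship, because searching many slices of noise guarantees that some slice will look coordinated.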

The process of looking for similar brain wave patterns or brain activity patterns in two different people being scanned is called "hyperscanning" or a search for "hyper-connectivity." A 2013 scientific paper on hyperscanning ("On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note") found this:

"To conclude, existing measures of hyper-connectivity are biased and prone to detect coupling where none exists. In particular, spurious hyper-connections are likely to be found whenever any difference between experimental conditions induces systematic changes in the rhythmicity of the EEG. These spurious hyper-connections are not Type-1 errors and cannot be controlled statistically." 

A meta-analysis of hyperscanning studies reveals that more than 80 percent of them consisted of only two subjects. Two subjects is way too small a study group size for a robust result.  An informal rule of thumb is that for any correlation-seeking study a minimum of 15 subjects per study group is needed for anything that might be a reliable result (and in most cases the minimum study group size is even larger). Almost every hyperscanning study is guilty of Questionable Research Practices because of the grossly inadequate study group sizes used. 
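A quick simulation (again my own illustrative sketch, not drawn from any paper) shows why tiny study groups are so untrustworthy: even when there is no real effect at all, small samples routinely produce correlations large enough to look like a finding.

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def null_study(n):
    """One simulated study of n subjects with NO true relationship."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    return abs(pearson(xs, ys))

# How often does a purely null effect look "moderate" (|r| > 0.4)?
rates = {}
for n in (5, 15, 100):
    rates[n] = sum(null_study(n) > 0.4 for _ in range(2000)) / 2000
    print(f"n = {n:3d}: {rates[n]:.0%} of null studies show |r| > 0.4")
```

With a handful of subjects, roughly half of the simulated null studies produce a "moderate" correlation out of nothing; with 100 subjects, almost none do. Small samples don't just weaken a result; they manufacture illusory ones.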

If our Scientific American journalist had not been "embedded" as part of a study, she might have been more likely to have critically covered the very dubious research involved. The journalist appeals to the faddish concept of "interbrain synchrony." A 2022 paper is entitled "Interbrain synchrony: on wavy ground."  Noting the fad-like nature of the research (by using the terms "in recent years" and "by storm"),  the paper states this:

"In recent years the study of dynamic, between-brain coupling mechanisms has taken social neuroscience by storm. In particular, interbrain synchrony (IBS) is a putative neural mechanism said to promote social interactions by enabling the functional integration of multiple brains. In this article, I argue that this research is beset with three pervasive and interrelated problems. First, the field lacks a widely accepted definition of IBS. Second, IBS wants for theories that can guide the design and interpretation of experiments. Third, a potpourri of tasks and empirical methods permits undue flexibility when testing the hypothesis. These factors synergistically undermine IBS [interbrain synchronicity] as a theoretical construct."

The paper is behind a paywall, but I can make a good guess at what the author is referring to by mentioning "a potpourri of tasks and empirical methods permits undue flexibility when testing the hypothesis." I would guess he's talking about the fact that "hyperscanning" studies or studies of "interbrain synchrony" are typically not pre-registered studies committing themselves to one particular way in which a synchrony of brains will be measured and analyzed. With researchers free to use any of 1001 statistical measures while trying to show some brain synchrony, it is hardly surprising that some success will be reported.  When you are free to keep torturing the data until it confesses, you will get a few confessions. 

Yesterday at the Mad in America site we had a long article by University of Pennsylvania neuroscience professor Peter Sterling that presents an indictment of some poorly established, fad-centered work in neuroscience. Here are a few interesting quotes:

"Here I comment on the 2022 review, 'Causal Mapping of Human Brain Function' by Siddiqi et al., which appeared in Nature Reviews Neuroscience....To be clear: there is no neuroscience to suggest that any mental function would be improved by ablating or stimulating a particular structure in the prefrontal cortex or its associated subcortical regions. To the contrary, what we know of the intrinsic organization of the prefrontal cortex suggests that mucking with it is unlikely to help. Any suggestion to the contrary is simply wild speculation. The Review tries to justify its story by claiming efficacious results. But I have heard these claims before, and they never check out. They’re just a succession of Ponzi schemes, as here recounted.... A new generation of psychosurgeons has been emboldened. Moreover, they are supported by prominent journals that publish the same old single-case reports that claim good results, even when casual inspection indicates they are clearly not good....Why does Frontiers publish this atrocious nonsense? I believe that it represents deep corruption that is creeping back into neuroscience—as will be further noted below.... Here is the lead example cited for efficacy: '16 patients were randomized into a two-week crossover period…During the crossover phase, the mean difference in Y-BOCS score was 8.3 points (P = 0.004); that is, an improvement of 25%'. Yikes! 25% improvement for 2 weeks?? Nature Medicine??....The lead author acknowledges membership on advisory boards and speaker’s honoraria from Medtronic, Boston Scientific, and Abbott, with a long list of similar 'competing interests' for many of the other contributors. 
For Nature Medicine to publish such pitifully weak findings by authors with a financial stake in the industry strikes me as more evidence for a true 'crisis': rising collusion between Big Scientific Publishing and Big Pharma/Medical Electronics...Despite efforts to find 'biomarkers' for mental disturbance, they remain elusive (Insel, 2022). Two recent large-scale neuroimaging studies find no evidence to distinguish depressed from normal individuals or even to distinguish the two populations (reviewed in Sterling, 2022). This suggests that the Review’s claims to identify regions of interest in depression, including subcategories, and 'criminal behavior' and 'free will' are artifactual. Such claims emerge inevitably in small studies: the message from neuroscience needs to go out: distrust small studies....The neuroscience community has been powerfully corrupted by dreams of glory and gelt."

Sunday, July 23, 2023

Neuroscientists Don't Seem Like Paragons of Diligence

We cannot doubt that certain types of scientists are very hard-working people. Consider, for example, a type of scientist who specializes in some particular type of wild animal. Studying that animal may require a great deal of laborious field work outdoors, perhaps spending many days in a tent. Such scientists are probably pretty hard workers. But what about neuroscientists? Should we regard them in general as being extremely hard-working people? Maybe not. 

The considerations below refer only to neuroscientists who are not practicing neurologists or neurosurgeons (a group of people who probably work very hard treating sick patients). There are quite a few reasons for suspecting that neuroscientists may not be terribly hard-working people. One reason is that the PhD theses of neuroscientists tend to be among the shortest. On the page here, we see an interesting visual made by Marcus Beck using all of the PhD dissertations produced at the University of Minnesota. The visual shows that neuroscience PhD dissertations are some of the shortest of any academic subject, averaging only a little more than 100 pages. For comparison, the same visual shows that history PhD dissertations average more than 300 pages, and that English and sociology dissertations average about 200 pages. 

Another reason for suspecting that neuroscientists may not be working extremely hard is the continued failure of neuroscientists to follow sound experimental procedures. Again and again in neuroscience studies we find the same old Questionable Research Practices. Acting in a sloppy and lazy manner, neuroscientists typically fail to devise a detailed research plan and commit themselves to following it, before they gather data. They gather data and may then fool around with dozens of different ways of analyzing the data until they get something they can report as "statistically significant." Such scientists have been told endless times that failing to commit themselves to testing a specific hypothesis in a specific way results in unreliable research that is probably picking up mainly "noise" rather than something important; but they keep ignoring such warnings. 

Following sound research practices requires very hard-working people who act with discipline. A "let's just gather data and then wing it to write the paper" approach is the kind of approach that may be preferred by rather lazy people. Gathering data from hundreds of subjects requires lots of work. But neuroscientists are infamous for using tiny study group sizes in their experiments. The typical neuroscience experiment involves gathering data from fewer than 15 human or animal subjects, often fewer than 10. That does not require very much work. Referring to neuroscience brain scan studies, the 2020 paper here tells us that "96% of highly cited experimental fMRI studies had a single group of participants and these studies had median sample size of 12." 

It has often been pointed out to neuroscientists that they are typically using study group sizes way too small for reliable results. For example, the 2017 paper co-authored by the well-known statistician John P. A. Ioannidis ("Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature") stated that "because sample sizes have remained small" it was sadly true that "false report probability is likely to exceed 50% for the whole literature."  But there is no evidence that neuroscientists have substantially changed their ways in response to such criticisms. It kind of reminds me of a teenager who keeps loafing on the couch one Saturday afternoon despite long being scolded by his parents to do something useful. 

There are other reasons for thinking today's typical neuroscientist simply isn't working terribly hard. One reason is related to the relatively low research output of today's neuroscientists. There is a rough method that can be used to calculate how productive scientists in a particular field are:

(1) Get an estimate of the total number of scientists of a particular type in the United States.

(2) Get an estimate of how many papers are published by scientists of that type in the United States.

(3) After getting a preliminary "papers published per scientist" estimate, remember to divide by the average number of authors per paper for papers published by scientists of that type.  

Let's try doing a rough calculation of this type. 

(1) It is estimated that there are about 25,000 neuroscientists in the United States.

(2) In 2020 about 600,000 scientific papers were published in the United States, and no more than about 25,000 of these papers are neuroscience papers. That gives you only about one neuroscience paper published per neuroscientist in the United States. 

(3) In the field of neuroscience, the average number of authors per paper is surprisingly high. According to the paper "A Century Plus of Exponential Authorship Inflation in Neuroscience and Psychology," there has been an astonishing rise in the average number of authors per neuroscience paper.  The paper says, "The average authorship size per paper has grown exponentially in neuroscience and psychology, at a rate of 50% and 31% over the last decade, reaching a record 10.4 and 4.8 authors in 2021, respectively." That gives us a current average of about 10 authors per neuroscience paper.   

So how much research work is the average neuroscientist doing per year? It seems reasonable to estimate that the amount of work done by one of about 10 paper authors would be equal to about one tenth the work needed to write the paper alone. So given only about one neuroscience paper published per neuroscientist in the United States, and an average of about 10 authors per paper, we are left with a very rough estimate that each year the average neuroscientist is doing only about one tenth of the research work needed to write a paper. Because the average neuroscience paper is about 30 pages long, we are left with the suspicion that neuroscientists do shockingly little work writing up scientific papers. Perhaps their published work is so little it amounts to only about three to ten pages per year of writing. But who knows; my estimates here are very rough. I may note, however, that a large fraction of neuroscience papers these days consists of brain scan visuals or charts produced by software, neither of which require much writing activity. 
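The arithmetic above can be written out explicitly. All the inputs below are the rough figures quoted in this post (estimates, not precise statistics), so the output inherits their roughness:

```python
# Back-of-the-envelope productivity estimate, using the rough figures
# quoted in the text above -- estimates, not hard data.
us_neuroscientists = 25_000
us_neuro_papers_per_year = 25_000
avg_authors_per_paper = 10
avg_paper_pages = 30

papers_per_scientist = us_neuro_papers_per_year / us_neuroscientists
share_of_a_paper = papers_per_scientist / avg_authors_per_paper
pages_per_year = share_of_a_paper * avg_paper_pages

print(f"papers per scientist per year: {papers_per_scientist:.1f}")
print(f"share of one paper's writing: {share_of_a_paper:.2f}")
print(f"published pages per scientist per year: ~{pages_per_year:.0f}")
```

Roughly one paper per scientist per year, divided among about ten authors of a thirty-page paper, works out to about three published pages per neuroscientist per year.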

I may note that many neuroscientists and other scientists engage in the appallingly dishonest practice of describing themselves as the authors of a certain number of papers, when they are mainly just co-authors of such papers, which typically had many authors. So if a neuroscientist was always a co-author, and a co-author of 100 scientific papers that had about 1000 different authors, he may dishonestly describe himself as "the author of 100 scientific papers." Then there is the issue that many neuroscience papers having some relation to drugs or medical devices are largely written by corporate-paid ghostwriters, paid to tell some story that promotes a corporate agenda. Based on the information in this article, reporting that at least 25% of the New York Times nonfiction bestseller list is written by ghostwriters, we may assume that many of the books of neuroscientists are largely written by ghostwriters.  The page here tells us that "nearly all experts and celebrities use ghostwriters."

The 2017 paper "Effect size and statistical power in the rodent fear conditioning literature – A systematic review" inadvertently amounts to an astonishing portrait of neuroscientist laziness.  One of the most basic things that a good experimental scientist should do is something called a sample size calculation, to determine  the number of subjects needed in an experiment for the experiment to have a certain amount of statistical power such as 80% power.   The paper reviewed 410 neuroscience experiments, and found that "only one article reported a sample size calculation." The average sample size reported in Figure 3 of the paper was only about 10 animals per experiment. The paper reports that "our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments." 


Similarly, the paper here on brain scan studies tells us that only 3% of papers that it examined had calculations of statistical power. That sounds like some very bad laziness going on.  

In the journal Science we read a story entitled "Fake Science Papers Are Alarmingly Common"  saying the following:

"When neuropsychologist Bernard Sabel put his new fake-paper detector to work, he was 'shocked' by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%....His findings underscores what was widely suspected: Journals are awash in a rising tide of scientific manuscripts from paper mills -- secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship." 

Referring to "red-flagged fake publications" as RFPs, a paper by Sabel ("Fake Publications in Biomedical Science: Red-flagging Method Indicates Mass Production") and three other authors (including 2 PhDs) states this:

"The results show a rapid growth of RFPs [red-flagged fake publications] over time in neuroscience (13.4% to 33.7%) and a somewhat smaller and more recent increase in medicine (19.4% to 24%) (Fig. 2). A cause of the greater rise of neuroscience RFPs may be that fake experiments (biochemistry, in vitro and in vivo animal studies) in basic science are easier to generate because they do not require clinical trial ethics approval by regulatory authorities."

Later we read this:

"Study 4 tested our indicators in an even larger sample of randomly selected journals included in the Neuroscience Peer Review Consortium. It redflagged 366/3,500 (10.5%) potential fakes."

Doing accurate complex math can be hard work, but the mathematics in neuroscience papers is typically pretty simple compared to the very complex math of theoretical physics papers. Moreover, the math that appears in neuroscience papers is often wrong. A common type of calculation is a calculation of an effect size. Using the word "ubiquitous" to mean "everywhere," and using the word "inflated" to mean "overestimated," a scientific paper tells us, "In smaller samples, typical for brain-wide association studies (BWAS), irreproducible, inflated effect sizes were ubiquitous, no matter the method (univariate, multivariate)." A paper on fMRI neuroscience research tells us this:

"Almost every fMRI analysis involves thousands of simultaneous significance tests on discrete voxels in collected brain volumes. As a result, setting one’s P-value threshold to 0.05, as is typically done in the behavioral sciences, is sure to produce hundreds or thousands of false positives in every analysis."

There are fields that require constant studying to keep up with the latest advances. In the decades I worked as a computer programmer, I had to learn new technologies over and over again as the programming world moved (between 1990 and 2010) from command-line DOS boxes to object-oriented programming to Windows graphical user interfaces to web-based programming, and from simple flat-file text data storage and spreadsheet data storage to relational databases and cloud data storage. But what changes have there been in neuroscience in the past 30 years? These guys are still trying to get by on the fMRI scans and EEG readings that they've been using for more than 30 years. And there's been no very substantive advance in the theories of neuroscientists in decades. 

There are other reasons for suspecting that neuroscientists may tend to be people who don't work terribly hard. Again and again, neuroscientists act rather like people who have failed to diligently study brains in a comprehensive manner, and failed to do the work we would expect serious scholars of brains to have done. Again and again neuroscientists repeat "old wives' tales" of neuroscience communities that are inconsistent with facts about brains that scientists have learned. An example is their repeated recitation of a claim that brain signals travel at 100 meters per second, instead of telling us the facts that imply that brain signals probably travel at an average speed more than 1000 times slower than this while traveling through the cortex. One of the most important things ever discovered in neuroscience is that brain signals travel across synapses very unreliably, with a reliability of only about 50% or less. Neuroscientists fail to realize the enormous implications of this fact, and typically speak just as if they had never learned it.  When they speculate about how a brain could store memories, making vague hand-waving claims about synapses, neuroscientists sound like people who have not well studied the high instability and high molecular turnover of synapses and the dendritic spines they are attached to. Neuroscientists typically write like people who have failed to seriously study the most important cases of high mental performance despite heavy brain damage (discussed here and here and here).  When they write about human memory performance, neuroscientists typically write like people who have not well studied the topic, and who don't know anything about exceptional human memory performance (discussed here and here). 
Neuroscientists are always dogmatically lecturing us about what they think are the causes of mental phenomena, but the great majority of neuroscientists show no signs of being very diligent scholars of human mental phenomena in all their diversity.  Again and again in their literature I read statements about human minds and human mental performance that simply are not true, such as the very silly claim (contrary to all human experience) that it takes quite a few minutes for someone to create a new memory.  The topic of human mental experiences and human mental abilities and mental states is a topic of oceanic depth. I repeatedly read writings from neuroscientists who sound like they have merely waded around at the edges of such an ocean, rather than often making very long and very deep dives into it. 

I wonder: is it a kind of "dirty little secret" of modern science that after getting your master's degree you can get to be a neuroscientist PhD without doing terribly much work, and that you can then kind of coast your way or cruise your way through very much or most of your career, typically not exerting yourself all that much? Is the field of neuroscience these days somewhat like a haven for sloppy workers or semi-slackers who get used to just "phoning it in"?

But taking care of patients is very hard work, so none of the above applies to neuroscientists heavily involved in clinical care. 

Sunday, July 16, 2023

The 4 Nonsensical Parts of the Tale That Your Brain Instantly Retrieved an Answer Learned Long Ago

There are some "show stopper" difficulties in the idea that a human uses his brain to answer a question that arises in his mind. But an interesting question is: is such an idea only partially nonsensical, or is it almost entirely nonsensical? I will argue now that the story that you use your brain to instantly retrieve an answer you learned long ago and stored in your brain is pretty much entirely nonsensical. What I mean by this is that when we break down the idea into a series of parts, we find that each of the parts is unworkable or incoherent or unbelievable. 

Let's analyze the tale of you using your brain to think of a question, and then instantly recalling its answer (learned long ago) by using your brain, looking at each of the component parts of such a tale.

Part 1: The Nonsensical Idea That a Question Arises in Your Brain  

There is nothing illogical about the idea that a question arises in a mind. A question is just an example of an idea, and humans can have ideas. So suppose I ask myself, "How was it that I learned how to swim?" That's equivalent to the idea that I somehow learned to swim, and that I might be able to recall how that happened.  The problem comes when we try to get a neural account of this question arising in my mind.  Ideas cannot be explained by brain activity. No neuroscientist has ever told a coherent credible tale of an idea arising in your brain. There is no evidence that brains change their state when someone has an idea. 

You can imagine some computer that processes data, and has a particular screen that is labeled "my current thought." The computer might retrieve different words, and display them on that screen. So at a particular moment you might see the screen showing, "Where did Napoleon Bonaparte die?" We could give some explanation of how such a question came to appear on such a screen. But in the brain there is nothing like this physical arrangement.  There is no little special area of the brain devoted to holding a person's current thought. 

Can we imagine, perhaps, some random part of the brain where some words are written that correspond to the question in your mind? There is no evidence that words or letters ever physically appear in the brain. You cannot scan any part of the brain and see any words or letters. So the idea of a question arising in your brain (as opposed to your mind) is nonsensical. There is no credible neural account we can give of a question arising in the brain.

Part 2: The Nonsensical Idea That the Answer You Learned Long Ago Was Converted to a Neural State or Synapse State

To continue our examination of the idea of the brain retrieving an answer that you learned long ago, we must imagine that long ago you learned some answer that you converted into some brain state or synapse state. For example, if you are to give a neural account of you recalling the answer to the question of how many states there are in the United States, you must imagine at some time you were told that the United States has 50 states, and that you stored such a fact as a brain state. But such an idea is nonsensical. We can imagine no neural states or synapse states that could store as simple a sentence as the sentence "The United States has 50 states." 

Nowhere in a brain or in synapses is there a writing area in which words can be written in any form resembling printed words or written words. And there's nothing in the brain corresponding to a pencil or pen that would write at a particular position. On the hard drive of a computer like those that existed in the year 2000, information writing occurred because the disk spun around, and there was a read/write head that could move to a particular spot on the disk to read or write. There's nothing like that in the brain. In the architecture of the brain, there's nothing like any cursor or current writing position, and there's nothing resembling a read/write head. So how could information ever be written in some particular spot of the brain?

You don't get around this by imagining that the brain stores some learned information by splitting it up into different parts, and storing the different parts in different places of the brain. That just makes the problem worse, creating new problems of how such a division could take place, why those particular scattered places would be chosen for storage, and how information so scattered could be reassembled instantly when you instantly remember something. Similarly, if I can't explain how some dog could write what happened to it today in one of the books in my bookcase, I don't make that problem easier by speculating that maybe the dog split his account up so it was stored not in one of my books but in five of them. 

Then there's the problem that proteins are the workhorses of the body, but a language such as English and the alphabet it uses are less than 3000 years old; and there has been no change in brain structure or brain proteins during that time. So how could your brain ever write something in some language that didn't exist when the latest proteins of the brain appeared? 

Humans remember skills and emotions and facts and images in a thousand different forms. We can imagine no translation scheme by which all such things could be converted to brain states or synapse states. Can we get around this by supposing, for example, that your answer to the question of how many states are in the United States is stored as the original sense impressions you had when you learned that? No, we can't. I remember endless thousands of things that I have learned without remembering the original sense impressions I had when learning such things. For example, if you ask me how many states are in the United States, I simply remember "50," but do not at all remember myself as some tiny child in a school room hearing the teacher teach me that.  

Although we hear neuroscientists talk longingly about one day discovering a neural code by which learned information could be converted to neural states, no such thing has been found. There isn't even a substantive detailed theory as to how such a thing could happen. A neuroscientist cannot even give you a detailed speculative explanation as to how even a phrase as simple as "my dog has fleas" could be stored as neural states. Any speculations you may read on this topic tend to be jargon-decorated hand waving. In short, the idea that the answer you learned long ago was stored as a neural state or a synapse state is nonsensical. 

Part 3: The Nonsensical Idea That the Answer You Learned Long Ago Persisted for Many Years as Some Stored Knowledge in a Brain

The brain is an area of heavy molecular turnover. The proteins that make up the synapses of the brain have average lifetimes of less than two weeks. Synapses are so small that it is hard to directly track their lifetimes. But we know that such synapses are attached to tiny structures in the brain called dendritic spines. And we know that such dendritic spines don't last very long. 

Dendritic spines last no more than about a month in the hippocampus, and less than two years in the cortex. This study found that dendritic spines in the hippocampus last for only about 30 days. This study found that dendritic spines in the hippocampus have a turnover of about 40% each 4 days. This 2002 study found that a subgroup of dendritic spines in the cortex of mice brains (the more long-lasting subgroup) have a half-life of only 120 days. A paper on dendritic spines in the neocortex says, "Spines that appear and persist are rare." While a 2009 paper tried to insinuate a link between dendritic spines and memory, its data showed how unstable dendritic spines are.  Speaking of dendritic spines in the cortex, the paper found that "most daily formed spines have an average lifetime of ~1.5 days and a small fraction have an average lifetime of ~1–2 months," and told us that the fraction of dendritic spines lasting for more than a year was less than 1 percent. A 2018 paper has a graph showing a 5-day "survival fraction" of only about 30% for dendritic spines in the cortex.  A 2014 paper found that only 3% of new spines in the cortex persist for more than 22 days. Speaking of dendritic spines, a 2007 paper says, "Most spines that appear in adult animals are transient, and the addition of stable spines and synapses is rare." A 2016 paper found a dendritic spine turnover rate in the neocortex of 4% every 2 days. A 2018 paper found only about 30% of new and existing dendritic spines in the cortex remaining after 16 days (Figure 4 in the paper).
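To get a rough feel for how fast such turnover compounds, here is a small Python sketch. It assumes, purely for illustration, simple exponential decay at the 120-day half-life that one of the studies above reported for the longer-lasting subgroup of cortical spines; the function name and the ten-year figure are my own choices, not anything from the cited papers.

```python
# Illustrative sketch (my own assumption: simple exponential decay,
# which is roughly the shape of the survival curves the papers report).
# With a 120-day half-life, almost nothing of an initial spine
# population would remain after a decade.

def surviving_fraction(days_elapsed, half_life_days=120.0):
    """Fraction of an exponentially decaying population still present."""
    return 0.5 ** (days_elapsed / half_life_days)

ten_years = 10 * 365
print(surviving_fraction(ten_years))  # well under one in a billion
```

Under this toy model, a population with a 120-day half-life falls below one part in a billion within ten years, which is why heavy spine turnover sits so awkwardly beside memories that last for decades.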

The main theory of memory storage in the brain involves claims that memories are stored in synapses. But what we know about synapses and dendritic spines and their instability and heavy turnover is inconsistent with the fact that memories can persist for many years. 

 Part 4: The Nonsensical Idea That You Instantly Found the Answer You Learned Long Ago That Was Stored in Your Brain

Now we come to what is perhaps the most nonsensical part of all: the idea that your brain instantly found some answer that was learned years ago.  Humans make things that allow the instant retrieval of information: things such as books and computers. We know the kind of underlying features that such things have that allow us to instantly retrieve information using such things. The four pillars of instant information retrieval using physical devices are addressing, sorting, indexing, and focusing. 

The four pillars of a physical instant retrieval of knowledge

For example, consider the simple case of a history book. You can find an answer about some historical topic instantly by using the index of the book to look up some topic, and then going to the appropriate page.  This requires addressing (that pages have particular page numbers), sorting (that the pages be sorted in numerical order), indexing (that there be an index), and focusing (the ability of your eyes to focus on the index entry or on a part of a page, and the fact that the book can only be opened to two pages at a time).  With an encyclopedia of alphabetically sorted topics, you can retrieve information instantly without either the indexing or the addressing. But with such a book you still need at least sorting and focusing (the focusing of your eyes on a particular page, and the design of the book which gives a kind of focusing by restricting the book to display two pages at a time). 
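The history book example can be sketched in a few lines of Python. All of the topic names and page numbers here are merely my own illustration, but the sketch shows how the four pillars cooperate: addressing (numbered pages), sorting (pages in order), indexing (a topic-to-page map), and focusing (landing on a single page).

```python
# Toy model of instant lookup in a history book (illustrative data only).

import bisect

pages = list(range(1, 501))                  # addressing: numbered pages, already sorted
index = {"Waterloo": 212, "Trafalgar": 88}   # indexing: topic -> page number

def open_book_to(topic):
    page_number = index[topic]               # the index avoids scanning every page
    position = bisect.bisect_left(pages, page_number)  # sorting lets us jump straight there
    return pages[position]                   # focusing: a single page in view

print(open_book_to("Waterloo"))  # → 212
```

Remove any one pillar from this sketch (shuffle the pages, delete the index, strip the page numbers) and the instant lookup degrades into a page-by-page hunt.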

Similar things occur when you retrieve information using a computer. When you look up something using a computer, your computer is internally using addressing and indexing, or accessing some database that uses addressing and indexing. And your computer or iPad or smartphone has a focusing mechanism: a screen that limits things so that only a particular rectangle of information is displayed. Also, the information retrieval requires your eyes to be focusing on the screen. 

So I have reviewed some of the things required for instant information retrieval to physically occur. Does the brain have any of these things? It does not. There is no addressing in a brain. Particular neurons do not have neuron numbers or neuron addresses, and particular synapses do not have synapse numbers or synapse addresses.  There is also no physical sorting in the brain. The physical structure of the brain makes a sorting of synapses or neurons impossible. Since each neuron has many connections which anchor it to a particular place, and most synapses are anchored in various ways such as being anchored to dendritic spines, there can be no physical sorting in the brain.  Consequently there can be no indexing in a brain, since indexing requires sorting and addressing. 

Is there at the very least some kind of physical focus mechanism in the brain? There is no sign of any such thing. We can speculate about how brains of extraterrestrial creatures might have some kind of focus mechanism. We can imagine some moving read unit that might move around from one part of the creature's brain to another; and when such a read unit was located at one particular part of the brain, there would be a focus on that part.  But no such thing exists in the human brain. Physically the brain shows zero signs of ever focusing on one particular part of the brain. 

A web page discusses the physical changes that occur when an eye focuses:

"Your curved cornea bends the light into your eye. Your lens changes shape to bring things into focus. When you look at things that are far away, muscles in your eye relax and your lens looks like a slim disc. When you look at things that are close, muscles in your eye contract and make your lens thicker.

There is no sign of any physical changes occurring in the brain when you remember something you learned long ago. There is no physical sign of any focusing occurring in the brain. Don't be fooled by brain scan visuals that use "lying with colors" techniques designed to make you think that some part of the brain "lights up" during some particular activity. Such visuals misleadingly depict changes in brain blood flow no greater than about 1 part in 200, and are not evidence of a brain focusing on some particular part to get data, but are mere signs of tiny random fluctuations. Using the same type of techniques, you could make the same type of visuals showing changes in blood flow in your liver while you were remembering something. 

We know the kind of things that enable instant information retrieval in objects humans make: things such as addressing, sorting, indexing and focusing. No such things occur in the brain when you remember some answer you learned long ago. So how could a brain ever instantly retrieve some fact you learned many years ago? It could not. That would be like instantly finding just the right palm-sized index card in an Olympic-sized swimming pool filled to the top with index cards.  
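The "swimming pool of index cards" situation can be contrasted with indexed retrieval in a short Python sketch. The card names and pile size are purely my own illustration: one lookup structure has addressing and indexing, the other is just an unsorted, unaddressed heap that forces an item-by-item scan.

```python
# Toy contrast (illustrative names): retrieval with an index versus
# retrieval from an unsorted, unaddressed pile of cards.

import random

cards = [f"fact-{n}" for n in range(100_000)]
random.shuffle(cards)                        # no sorting, no addresses

index = {card: pos for pos, card in enumerate(cards)}  # what books and computers add

def find_without_index(target):
    """Blind scan: must examine card after card until the target turns up."""
    steps = 0
    for card in cards:
        steps += 1
        if card == target:
            return steps
    return None

def find_with_index(target):
    """Indexed lookup: one step, no scanning."""
    return index[target]
```

The indexed lookup is effectively instant no matter how big the pile grows, while the blind scan may touch tens of thousands of cards before stumbling on the right one.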

I have examined four different parts involved in the tale that you use your brain to retrieve some answer you learned long ago and stored in your brain. All of the parts are nonsensical. Similarly, if we were to analyze the different parts involved in a tale of someone holding a big soap bubble that lifts him up and carries him to live a happy life on  Venus, we would find that each of the different parts of the story is nonsensical. 

There is a much better account that makes sense: the account that you have a soul that is equipped with a nonphysical repository of knowledge and memory. You can therefore instantly retrieve things you have learned, because there is no physical retrieval involved. And you can remember things that you learned very long ago, because the repository does not consist of short-lived structures subject to heavy molecular turnover.  And you can instantly form new memories and learn new things, because you don't have to wait minutes or hours for protein synthesis to occur each time you learn something.  And should it be that your brain shuts down temporarily during cardiac arrest, you may (like very many people who had the same thing happen) be able to tell a very vivid recollection of a near-death experience, because the electrical inactivity of your brain will not stop you from forming new memories. And if this occurs you may well remember observing your body from far away from it, like so many others having near-death experiences and out-of-body experiences, because your brain never was the source of your self. 

Sunday, July 9, 2023

The Two Years of MSNBC's Mueller Hype Was Like the Science Media's Poor Journalism

During most of 2020 and 2021, the COVID-19 pandemic and the 2020 US election were the two big stories in the US press. But in 2017 and 2018, parts of the media acted as if a different story had been crowned the Big Story: the Mueller Investigation. We can look back now at the coverage of that investigation, and realize that much of the coverage was a bad example of junk journalism. 

Triggered by the firing of FBI director James Comey by Donald Trump (then president of the United States), the Mueller Investigation was launched in May 2017 to investigate claims of Russian interference in the 2016 United States election. Until it concluded in March 2019, the investigation was a secret government inquiry. But this did not stop television networks such as MSNBC from reporting on it almost every day for two years. The stories about the investigation that we got on MSNBC shows such as the Rachel Maddow Show were in general a case of junk journalism. 

For almost two years on her primetime show, Rachel Maddow featured a steady stream of talking heads speculating about what the Mueller Investigation was doing and what its end result would be. It was a long stream of guesswork and wishful longing, with guest after guest hinting that once the investigation was finished, it would spell political doom for the US president at that time, Donald Trump. Between May 2017 and March 2019 there was little in the way of substantive facts to back up such speculation, because the Mueller Investigation had not yet released its report. For almost two years MSNBC led us to believe that the Mueller Investigation was pretty much the most important story of its time. A Guardian article says this:

"The Mueller investigation was covered more on MSBNC than any other television network, and was mentioned virtually every day in 2018. No twist was too minuscule or outlandish for Maddow; every night, seemingly, brought another nail in the coffin of the soon-to-be-dead Trump presidency...In more sober times, this brand of analysis would barely cut it on a far-right podcast. In the Trump era, it was ratings gold."

The Mueller Investigation resulted in the indictment of 34 individuals, eight of whom were convicted or pleaded guilty. None of the charges involved colluding with Russians to alter the 2016 election. Donald Trump was not indicted. Finally in April 2019 the Mueller Investigation released its report. The results were anticlimactic. The report said that substantial Russian interference in the 2016 US election had occurred, but such a thing was already known before the Mueller Investigation began in 2017. 

A BBC article summarized the rather dull and muted findings of the Mueller Investigation:

"Mr Mueller's 448-page report said it had not established that the Trump campaign criminally conspired with Russia to influence the [2016] election. However, it did detail 10 instances where Mr Trump had possibly attempted to impede the investigation and stated the report did not exonerate Mr Trump. Mr Mueller reiterated that in a rare statement following the end of the inquiry and said legal guidelines prevent the indictment of a sitting president. He said if his team had had confidence that Mr Trump 'clearly did not commit a crime, we would have said so.' "

The Mueller Investigation seemed to have no great effect on the opinions of US voters, and there was little change in the polls.  By April 2020 people began focusing on the COVID-19 pandemic, and the Mueller Investigation was almost forgotten by the public. After losing the 2020 US presidential election Donald Trump now finds himself indicted on other charges and in serious legal trouble for various other reasons; but the idea that the Mueller Investigation would be his downfall was not at all correct. 

There is a strong resemblance between two years of MSNBC's "Mueller mania" junk journalism and the junk journalism that continues to pervade the reporting of science news. Some of the similarities are as follows:

(1) Just as MSNBC gave us two years of softball-question interviews with speculating authorities who did not really know what they were talking about when they droned on about the Mueller Investigation (because the investigation was secret), science journalists give us year after year of softball-question interviews with speculating scientists who do not know what they are talking about when they speculate on topics such as dark energy and dark matter (which have never been observed), human memory (which is not credibly explained by anything known about the brain),  and macroevolution (which has never been observed by humans, unlike small-scale microevolution that has been observed). 

(2) Just as MSNBC hosts such as Rachel Maddow for two years tried to give us the impression that they had some deep insight into some matter they did not understand (the secret Mueller Investigation), science journalists year after year write articles trying to give us the impression that they have some great insight into baffling mysteries of nature that are vastly beyond their understanding (such as the mystery of how an enormously organized human body is able to form from a speck-sized zygote, the mystery of how any human is able to think, imagine or remember, or the mystery of how the human species originated). 

(3)  Just as the great majority of the Mueller Investigation talk we got for two years (2017 and 2018) on MSNBC shows such as the Rachel Maddow Show was pretty worthless hype and speculation, a large fraction of the content of today's science news is almost-worthless hype and speculation, including lots of poor and uncritical hype-heavy discussion of badly designed science research guilty of Questionable Research Practices, in which weak and unimportant papers are hailed as giant breakthroughs.

(4) Just as the endless Mueller Investigation talk we got for two years (2017 and 2018) on MSNBC was ideology-heavy content designed to influence worldviews and voting behavior, a great deal of the content in our science news feeds and science web sites is ideology-heavy content designed to influence worldviews, and make you think a particular way about yourself and the universe you live in: that your species is an accident of nature that arose in an accidental universe.  

(5) Just as the endless Mueller Investigation talk we got for two years (2017 and 2018) on MSNBC distracted us from things we should have been paying attention to (such as the risk of a global pandemic like the one starting in 2020, the danger of hazardous gene-splicing in biomedical labs with insufficient safeguards, and the danger of inflation which rather suddenly got so bad around 2021), the speculation and clickbait journalism that litters science news sites have been a very bad distraction reducing the amount of time people spend on pondering the really important developments related to science and nature. Such developments include the gradual discovery that we live in a very precisely fine-tuned universe that had to be just right in dozens of ways for us to exist; the accelerating discovery of stratospheric levels of organization, coordinated complexity and purposeful fine-tuning in the human body and in the bodies of all mammals; the worsening failure of explanations such as Darwinism and genetics to account for such wonders of nature; the failure of neuroscientists to provide robust evidence backing up their claims of a neural basis of mind and memory; increasing evidence suggesting brains have many physical shortfalls ruling out claims that they explain mind and memory; a replication crisis shaking the public's confidence in various types of experimental science such as neuroscience; and the growing evidence for psychic phenomena such as out-of-body experiences that are inconsistent with the claims about minds and brains typically made by today's scientists.  

[Image: "bad detective." See the linked post for a long discussion of the six clues shown above.]

Sunday, July 2, 2023

Pharmaceutical Company Entanglements Cloud the Objectivity of Neuroscientists and Psychiatrists

One of the biggest mistakes that people make when considering the conclusions of an expert is to imagine the expert as some objective, disinterested person weighing truth like some impartial judge or impartial jury. There are all kinds of reasons why an expert may have "skin in the game" so that he is more like a juror who was bribed to reach a particular conclusion. The largest reason why experts lack impartiality is that they tend to be members of expert communities in which particular opinions are expected because of belief and behavior traditions and speech customs in such communities. 

Let's imagine the difference between two experts, Joe and Jack. Existing outside of a community and using his own savings for funding, Joe spends years studying some subject, enough time to become an expert on that topic. But Joe gets no paycheck related to his activity, and no funding from any group tending to encourage him to think in some particular way. Now Joe will tend to reach opinions on the area of his expertise that are impartial and unbiased.  Not dependent on any kind of funding, Joe knows that he can state whatever opinions he wants on his topic of expertise without fear of a negative financial impact. Joe knows he will never face some kind of performance review or committee hearing in which he will be judged by how well he conforms to the belief customs of some community of experts. In this regard I bear some resemblance to Joe, as I have never taken a cent from anyone except for government agencies (for example, things like tax refunds) and employers that paid me long ago for work such as computer programming work. 

Conversely, Jack becomes an expert in a way that leaves him very much an "organization man." Making a big financial investment in graduate school tuition, Jack studies to be a neuroscientist. His progress in getting his Master's Degree and PhD will depend on how well he conforms with the belief customs of neuroscientists who are teaching him. He can't write a PhD thesis defying the dogmas of such people, unless he wants to take a huge gamble. Then, after getting his PhD, Jack faces a long climb up a career ladder stretching from newly minted PhD to postdoctoral research associate to lecturer/assistant professor to reader/associate professor to full professor. His progress up each step will depend on his conformity to the belief customs and speech traditions of his fellow neuroscientists. At each step there are academic committees who reward conformity and punish nonconformity. So how can Jack possibly think of himself as some kind of unbiased judge on matters such as the capabilities of the human brain?  Throughout his ivory-tower odyssey, Jack will be more like a bribed juror than an impartial judge, because Jack will have more and more of a vested interest in conforming to the speech customs of his peers and superiors. 

There's another way in which the objectivity of neuroscientists is tainted: the fact that they need to get research funding that is abundantly available to neuroscientists conforming to neuroscientist speech customs, but scarcely available to those defying such customs. Where does most neuroscience funding come from? In the paper "Financial Anatomy of Neuroscience Research," we read that "pharmaceutical industry and the largest biotechnology and medical device firms accounted for 58% of total funding." What kind of research would such players fund? Only research based on assumptions that our brains are the source of our minds and the storage place of our memories.  

As for psychiatrists, they are up to their necks in pharmaceutical company entanglements. Pharmaceutical companies bombard psychiatrists with literature claiming that certain categories of mental distress can be treated with particular pills. The psychiatrist constantly given such claims will have a temptation to (a) categorize patients in some category corresponding to an available pill, and (b) prescribe such a pill. Often this may involve buying into some underlying assumption that is dubious or controversial. 

Consider the case of major depression. Strongly encouraged by pharmaceutical companies, psychiatrists and neuroscientists for decades pushed a very dubious claim that depression was caused by a brain chemistry imbalance such as a serotonin shortage.  There was never any very good evidence for such a claim. A study analyzing more than 1800 patients who had been brain-scanned stated, "We provide a large-scale, multimodal analysis of univariate biological differences between MDD [major depressive disorder] patients and controls and show that even under near-ideal conditions and for maximum biological differences, deviations are remarkably small, and similarity dominates." 

An article by Bruce E. Levine cites some reasons for thinking the pharmaceutical companies' "yes-men" psychiatrists have failed:

"In 2011, Thomas Insel, director of the National Institute of Mental Health (NIMH) from 2002-2015, acknowledged: 'Whatever we’ve been doing for five de­cades, it ain’t working. When I look at the numbers—the number of sui­cides, the number of disabilities, the mortality data—it’s abysmal, and it’s not getting any better.' In 2017, Insel told Wired: 'I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness.' In 2021, New York Times reporter Benedict Carey, after covering psychiatry for twenty years, concluded that psychiatry had done 'little to improve the lives of the millions of people living with persistent mental distress. Almost every measure of our collective mental health—rates of suicide, anxiety, depression, addiction deaths, psychiatric prescription use—went the wrong direc­tion, even as access to services expanded greatly' ... In 2022, CBS reported: “Depression is Not Caused by Low Levels of Serotonin, New Study Suggests.” Receiving widespread attention in the mainstream media was the July 2022 research review article 'The Serotonin Theory of Depression: A Systematic Umbrella Review of the Evidence,' published in the journal Molecular Psychiatry. In it, Joanna Moncrieff, co-chairperson of the Critical Psychiatry Network, and her co-researchers examined hundreds of different types of studies that attempted to detect a relationship between depression and serotonin, and concluded that there is no evidence of a link between low levels of serotonin and depression, stating: 'We suggest it is time to acknowledge that the serotonin theory of depression is not empirically substantiated.' "

In a recent paper a neuroscientist seems to confess the lack of evidence for assumptions that mental illness is caused by brain abnormalities, stating this:

"Yet, despite three decades of intense neuroimaging research, we still lack a neurobiological account for any psychiatric condition. Likewise, functional neuroimaging plays no role in clinical decision making." 

Dubious claims of a neurological cause for problems such as depression and anxiety divert people from focusing on the life history causes and social injustice causes and social influence causes of such problems, such as people being poorly treated by others now and in their past,  people being homeless or in poverty largely because of unfair social structures and laws that benefit the rich, people living under threats such as nuclear war, and people living in a society in which materialist professors seem hell-bent on depicting humans as mere accidents of nature or mere epiphenomena of brain chemistry rather than souls who are meaningful participants in some grand divine plan. Acting against all of the evidence, which suggests in a very loud voice that we are physically enormously purposeful arrangements of matter in a purposeful fine-tuned universe, with minds that cannot be explained by such arrangements of matter, our materialist professors act as if they were trying as hard as they could to depict us in the most depressing way, as meaningless accidental animals; and then they think to themselves,  "Why are so many people so depressed? It must be their brain chemistry." 

[Image: bad psychiatry advice]

Many complain about the arbitrary categories of the "Bible of psychiatry," the DSM, a text that displays a great number of cultural conventions that seem to lack scientific objectivity. Complaining that there is no objective physical test for almost all psychiatric diagnoses (something comparable to a COVID-19 test), the author of a book about misdiagnoses states this:

"DSM diagnoses, they're lists of symptoms created by mental health professionals sitting around a table. They're based on opinions and theories, not hard data, and published in a book. That's all they are. The real danger is that we believe they're something other than what they are."  

Who are the experts who create and update this DSM "diagnosis bible" of psychiatrists? Experts often under heavy influence by pharmaceutical companies. The DSM often tells a dubious tale that is music to the ears of some pharmaceutical company executives, who cross their fingers while wishing that the experts will keep saying that Condition X is objective reality, when the company makes Pill Y claimed to help with Condition X.  Although it does not recommend specific medications, the DSM manual ends up indirectly being a goldmine for the pharmaceutical companies, who return the favor by shoveling money towards the writers of the DSM manual, in forms such as research grants. The pharmaceutical companies profit because they make drugs they claim are treatment for specific conditions listed in the DSM. 

The DSM manual often contains diagnoses that seem like arbitrary social constructs of a psychiatrist belief community or committee, such as describing an alleged disorder called "Social (Pragmatic) Communication Disorder," which the manual characterizes by using vague, woolly terms such as "deficits in using communications for social purposes" and "difficulties following rules for conversation and storytelling, such as taking turns in conversation." On many pages the DSM manual seems to end up medicalizing and pathologizing nonconformity, and on such pages the DSM manual sounds more like something written by some Ministry of Social Conformity than something inspired by hard science.