The Mad in America site (www.madinamerica.com) is not a philosophy-of-mind site, but a site dealing with the shortfalls of biological psychiatry, a psychiatric approach based on the idea that mental illnesses are mostly caused by brain states (as opposed to a person's life history and living conditions). The Mad in America site often features well-written and scholarly articles that help to debunk some of the claims of "brains make minds" theorists. One example is a recent article by Peter C. Gøtzsche, MD. Near its beginning he makes this statement: "Despite 15 years of intense studying, I have been unable to find any important contribution of biological psychiatry to our understanding of the causes of psychiatric disorders and how they should best be treated." Referring to Attention Deficit Hyperactivity Disorder (ADHD), the doctor says, "The fact is that ADHD is a social construct and that no reliable studies have shown any biological origin for this construct, or that the brains of people with this diagnosis are different to the brains of other people." The doctor also states this:
"Another textbook noted that the findings obtained with structural and functional scans were inconsistent and varying, especially those obtained with functional MR scans that measure small changes in blood flow to various areas of the brain while the patient is given various tasks. This whole area is a mess of highly unreliable research. A 2009 meta-analysis found that the false positive rate in neuroimaging studies is between 10% and 40%. And a 2012 report written for the American Psychiatric Association about neuroimaging biomarkers concluded that 'no studies have been published in journals indexed by the National Library of Medicine examining the predictive ability of neuroimaging for psychiatric disorders for either adults or children.' "
The doctor then tells us this about a 2012 analysis of brain imaging studies:
"Carp found that many of the studies didn’t report on critical methodological details about experimental design, data acquisition, or analysis, and many studies were underpowered. Data collection and analysis methods were highly flexible. The researchers had used 32 unique software packages, and there were nearly as many unique analysis pipelines as there were studies. Carp concluded that because the rate of false positive results increases with the flexibility of the design, the field of functional neuroimaging may be particularly vulnerable to false positives. Fewer than half of the studies reported the number of people rejected from analysis and the reasons for rejection, and the median sample size per group was only 15, which generates an enormous risk of selective publication of those results that happened to agree with the investigators’ prejudices. The order of processing procedures also permits substantial flexibility in the analyses. Replication is essential for the trustworthiness of science, and scientific papers must report experimental procedures in sufficient detail that allows independent investigators to reproduce the experiments. This is far from the case in imaging studies."
The doctor tells us that the same Carp analyzed a single brain scanning study, and found that, using all of the different analysis choices in the literature, some "6,912 unique analysis pipelines" could be applied to the data, with almost as many different possible results arising from such analysis differences. That is pretty much a situation that can be described as "whatever you want to see, you can find": a researcher need only do trial and error with different analysis pipelines until he sees what he wants. The situation fits the old rule that "if you torture the data long enough, it will confess to anything."
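The pipeline-flexibility problem is easy to demonstrate with a toy simulation. The sketch below is not from Carp's paper or the article; it is a minimal illustration with made-up parameters (50 hypothetical "brain regions," 15 subjects per group, and a few arbitrary analysis choices such as cluster location, cluster width, and outlier cutoff). Both groups are pure random noise, so any "effect" found is spurious, yet searching across a few hundred pipelines reliably turns up an impressive-looking group difference:

```python
import random
import statistics

N_REGIONS = 50      # hypothetical brain regions measured per subject
N_PER_GROUP = 15    # a typical small-study group size

def pipeline_effect(patients, controls, region, width, cut):
    """One arbitrary 'analysis pipeline': average a cluster of regions
    starting at `region`, drop subjects more than `cut` standard
    deviations from their group mean, then report the group difference
    in pooled-standard-deviation units (a Cohen's-d-like effect size)."""
    def cluster_mean(subject):
        return statistics.fmean(subject[region:region + width])
    def trimmed(group):
        vals = [cluster_mean(s) for s in group]
        m, sd = statistics.fmean(vals), statistics.stdev(vals)
        kept = [v for v in vals if abs(v - m) <= cut * sd]
        return kept if len(kept) >= 2 else vals
    a, b = trimmed(patients), trimmed(controls)
    return (statistics.fmean(a) - statistics.fmean(b)) / statistics.stdev(a + b)

rng = random.Random(7)
noise = lambda: [rng.gauss(0, 1) for _ in range(N_REGIONS)]
patients = [noise() for _ in range(N_PER_GROUP)]  # pure noise: no real
controls = [noise() for _ in range(N_PER_GROUP)]  # group difference exists

# One pre-registered pipeline versus the best of ~400 pipelines tried.
fixed = abs(pipeline_effect(patients, controls, region=0, width=5, cut=3.0))
best = max(abs(pipeline_effect(patients, controls, r, w, c))
           for w in (3, 5, 10)
           for r in range(N_REGIONS - w)
           for c in (2.0, 2.5, 3.0))
print(f"pre-registered effect size: {fixed:.2f}")
print(f"best of ~400 pipelines:     {best:.2f}")
```

Since the pre-registered pipeline is one of the roughly 400 searched, the "best" result can never look worse than it; on pure noise it typically looks dramatically better, which is the whole danger.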
The doctor tells us this:
"In 2022, other researchers used three of the largest neuroimaging datasets available including a total of around 50,000 individuals to quantify brain-wide association studies’ (BWAS) effect sizes and reproducibility as a function of sample size.76 The median sample size was only 23 people. The researchers found that BWAS reproducibility requires samples with thousands of people. As a commentator wrote, the study showed that almost every person diagnosed with depression will have the same brain connectivity as someone without the diagnosis, and almost every person diagnosed with ADHD will have the same brain volume as someone without ADHD.77 Yet, in the small studies, correlations were almost always greater than 0.2 and sometimes much larger, which, as the researchers wrote, should not be believed."
To help understand what is going on, imagine a scientist who happens to believe in astrology, and who believes that wealth is associated with month of birth. Using a large sample size such as 1,000 subjects, he will find no correlation of any substantial size between these things. But it will be easy for him to report some small correlation if he uses a small sample size such as only 15 subjects, if he does not pre-register one specific hypothesis (such as the hypothesis that people born in June tend to end up wealthier), and if he is free to leave unpublished any result not matching what he hoped to find (something called the file drawer effect). Free to look for either slightly greater or slightly less wealth for people born in any of the 12 months of the year, and using a sample size as small as 15 subjects, he will have a good chance of finding some small correlation. Such a study (finding only false-alarm noise) resembles the typical brain scanning study using only a small number of subjects. But for the scientist doing such a brain scan study, things are even easier. Instead of having only 12 months of the year to test in search of a spurious correlation, such a scientist has hundreds of tiny brain regions he can check, until a little "statistical significance" can be found.
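The astrology thought experiment can be checked numerically. The simulation below is my own illustrative sketch (the 12-month search, 15-versus-1,000 subject sizes, and normally distributed "wealth" are assumptions for the demonstration, not data from any study). Wealth is generated as pure noise, so every correlation with birth month is spurious; the simulation reports the strongest of the 12 month-by-month correlations, mimicking an unregistered search:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation; returns 0.0 if either variable is constant."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def best_of_12_months(n, rng):
    """Wealth is pure noise, yet we test all 12 birth months and keep
    the strongest correlation found -- an unregistered fishing trip."""
    months = [rng.randrange(12) for _ in range(n)]
    wealth = [rng.gauss(0, 1) for _ in range(n)]
    return max(abs(pearson_r([1.0 if m == k else 0.0 for m in months], wealth))
               for k in range(12))

rng = random.Random(42)
small = [best_of_12_months(15, rng) for _ in range(200)]    # 200 small studies
large = [best_of_12_months(1000, rng) for _ in range(200)]  # 200 large studies
print("median best |r| with 15 subjects:  ", round(statistics.median(small), 2))
print("median best |r| with 1000 subjects:", round(statistics.median(large), 2))
```

With 15 subjects the best-of-12 spurious correlation is routinely sizable (in the range the quoted researchers said "should not be believed"), while with 1,000 subjects the same fishing trip yields only a negligible correlation: the junk finding dissolves as the sample grows.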
Unreliable junk correlations can always be found by people searching for them in small data sets involving only a handful of subjects. Such correlations will dissolve like the morning mist once a much larger set of subjects is tested. In general, we should have no confidence in any brain scan study that used only a dozen or two subjects in any of its study groups. Unfortunately, the great majority of brain scan studies fall into that category.
The doctor cites the following, an indication that many brain scan papers may not even match the data collected:
"The experience of the Editor-in-Chief of Molecular Brain is also relevant to consider when assessing the merits of brain scanning studies in psychiatry. In 2020, he described what happened when he requested to see the raw data in 41 of the 180 manuscripts he had handled. Upon his requests, 21 of the 41 manuscripts were withdrawn by the authors, and he rejected a further 19 'because of insufficient raw data,' which suggested that the raw data might not exist, at least for some of the cases. Thus, only 1 of 41 papers (2%) passed his reasonable test."
On another page the same doctor states this about attempts to show a genetic basis for mental illness:
"Many billions of dollars have been spent by the US National Institute for Mental Health (NIMH) on finding genes predisposing to psychiatric diseases and on finding their biological causes. This has resulted in thousands of studies of receptors, brain volumes, brain activity, and brain transmitters. Nothing useful has come out of this enormous investment apart from misleading stories about what the research showed. This might have been expected from the outset. It is absurd, for example, to attribute a complex phenomenon like depression or psychosis or attention deficit and hyperactivity to one neurotransmitter when there are more than 200 such transmitters in the brain that interact in a very complex system we don’t understand."
The doctor dismisses claims that ADHD (Attention Deficit Hyperactivity Disorder) is caused by smaller brains:
"The study that claimed that children with an ADHD diagnosis have small brains has been widely condemned. Lancet Psychiatry devoted an entire issue to criticisms of the study. Allen Frances, chair of the DSM-IV task force (DSM is the Diagnostic and Statistical Manual of Mental Disorders, issued by the American Psychiatric Association), and Keith Conners, one of the first and most famous researchers on ADHD, re-analysed the data and found no brain differences."
The doctor points out that many of the researchers claiming brain links to mental illnesses have financial conflicts of interest, which can arise when a researcher receives money (directly or indirectly) from some pharmaceutical manufacturer who stands to profit when scientists make "brain problems cause mental illness" claims. On another page of the Mad in America site, we read this: "A study published in the Community Mental Health Journal finds that two-thirds of psychopharmacology textbooks have authors and/or editors that receive payments from pharmaceutical companies." We read of 11 million dollars paid to "11 of 21 editors/authors over a seven-year period."
Pharmaceutical manufacturer money is only part of the reason for regarding the typical experimental neuroscientist as something like a bribed juror. Today's scientists live in a "publish or perish" culture in which they are judged by how many papers they get published and how many citations those papers get. A scientist is far more likely to get prized research grant money if he proposes an experiment that might help to confirm some existing dogma about the brain, rather than an experiment that might produce results conflicting with such dogmas. Also, a scientist who finds no link between brain scans and some mental state has to report what is called a negative or null result. But many journals have a policy of favoring papers reporting positive results. So such a scientist has a great incentive to fiddle with his data analysis pipeline until some positive result can be claimed. The more the reported result fits in with the prevailing dogmas of neuroscientists, the more likely the paper is to get published, and the more citations it will get. The more some ambiguous, borderline or questionable result is described in a paper title or abstract as a clear and important result, the more the authors will rack up the prized paper citations. Being part of such an ecosystem, in which only results claiming to support prevailing dogmas are rewarded, such a scientist may be no impartial judge of truth, but more like a juror bribed to reach a particular conclusion.
Hi. Do you have any thoughts on the moderately recent and hyped-up Attention Schema Theory by Michael Graziano? It's praised both for its "non-magical" approach to consciousness and for providing a framework for mind uploading/AGI.
In a 2016 post I note a few silly-sounding statements by Graziano:
https://futureandcosmos.blogspot.com/2016/01/folly-of-consciousness-deniers.html
The theory you mention is described below:
https://aeon.co/essays/how-consciousness-works-and-why-we-believe-in-ghosts
There we read him state, "It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention." That does not sound like something worth studying.
Mind uploading is based on the idea that your mind consists of states that can be copied from a brain, an idea I reject entirely for reasons discussed in the posts on this blog. A proper study of the mind and brain (and psychical phenomena) will leave you thinking that the impossibility of mind uploading is nothing to worry about. Since your mind is not produced by your brain, and does not consist of brain states, you need not worry about losing your self when your brain stops working.