One paper lamenting the lack of progress in neuroscience is "Why hasn't neuroscience delivered for psychiatry?" by David Kingdon, a professor of psychiatry. After noting some progress in medicine, Kingdon states the following:
"The major mental illnesses psychosis, bipolar disorder, anxiety disorders, anorexia nervosa and depression have proved remarkably resistant to similar developments. Unfortunately, it is still not possible to cite a single neuroscience or genetic finding that has been of use to the practicing psychiatrist in managing these illnesses despite attempts to suggest the contrary."
After noting the lavish funding that neuroscientists have long received in attempts to find a brain cause for mental illnesses, Kingdon states this:
"Why do we not have evidence of biological malfunctioning for severe mental disorders? Mental disorder can be caused by biological insults such as frontal lobe damage, dementia and delirium, but biological changes have yet to be shown to be relevant to the major mental disorders."
Talking about changes in the brain, Kingdon states this: "No such clear causative changes exist in severe mental illnesses such as depression, anxiety, bipolar disorder and schizophrenia." After noting "25 years of research frustration," Kingdon quotes a neuroscientist who advocates that we keep at this not-getting-much-of-anywhere research approach. Kingdon then states this:
"But does this not seem, after more than 30 years of failure, more akin to a religious or, albeit culturally influenced, persistent strong belief than one based on scientific grounds? Just where is the rational justification for ploughing the same furrow again and again?"
Kingdon then ends by stating this: "The time has come to challenge the justification for such relatively high levels of investment of time, expertise and resource in neuroscience for mental disorders."
I can give an answer to the question posed by Kingdon's paper, the question of, "Why hasn't neuroscience delivered for psychiatry?" The answer is that the main claims of neuroscientists about brains and minds are incorrect. Our minds are not produced by our brains as neuroscientists claim. So looking for neural causes of the main mental illnesses is an approach likely to fail. Once experts realize that mind is a fundamentally spiritual and psychic thing, they may start pursuing spiritual, social, psychological and psychic approaches to mental health treatment, approaches that may do far more for helping mental illness than neuroscientists have ever done.
Recently we had a paper showing the latest failure of scientists to show a neural basis for a commonly diagnosed mental condition. The condition studied was ADHD, which stands for Attention Deficit Hyperactivity Disorder. Children diagnosed with this condition may be observed paying less attention in school, and may be so active and full of energy that they find it difficult to stay seated at a desk during school lessons.
The paper (which you can read here) is entitled "Brain morphological changes across behaviour spectrums in attention deficit/hyperactivity disorder." It analyzed brain scans of 135 children and adolescents diagnosed with ADHD, along with 182 "neurotypical controls."
Looking for differences in gray matter volume (GMV) between the subjects with ADHD and the normal subjects, the study failed to find any difference. We read this: "Voxel-wise comparisons of GMV [gray matter volume] between participants with ADHD and NCs [normal controls] revealed no significant differences, which contrasts with current understanding of the pathophysiological mechanisms underlying ADHD." In other words, the subjects diagnosed with attention deficit disorder did not have smaller amounts of gray matter in their brains. And their brains were not smaller.
The authors then resorted to the old "keep torturing the data until it confesses" trick that can be called "subgroup mining." This technique works like this: when a scientist fails to find a difference in the overall group of subjects, the scientist may look for some fraction of the group in which a difference does appear. So, for example, if you are analyzing 100 American subjects and 100 Mexican subjects looking for a difference in intelligence, and you find no difference between the groups as a whole, you might then try to create a kind of "aroma of a difference" by reporting a difference between one subgroup of American subjects who scored higher and one subgroup of Mexican subjects who scored lower. This tactic of "subgroup mining" is in general misleading: it creates unwarranted impressions of differences, because those impressions do not correspond to the data found in the entire pool of subjects.
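The subgroup-mining hazard described above can be demonstrated with a little simulation. The sketch below is not from the paper; the group sizes, the "random subset" notion of a subgroup, and the normal-approximation p-value are all invented for illustration. It draws two groups from the same distribution, so no real difference exists, then compares many arbitrary subgroups and counts how many comparisons happen to look "significant" anyway.

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)

def approx_p(a, b):
    # Two-sided p-value for a difference in means, using a normal
    # approximation to the two-sample t statistic (reasonable for large n).
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Two groups drawn from the SAME distribution, so no real difference exists.
group_a = [random.gauss(100, 15) for _ in range(200)]
group_b = [random.gauss(100, 15) for _ in range(200)]
print("whole-group p:", round(approx_p(group_a, group_b), 3))

# "Subgroup mining": compare many arbitrary subgroups and keep any that
# happen to look significant. Each "subgroup" here is just a random subset.
hits = sum(
    approx_p(random.sample(group_a, 40), random.sample(group_b, 40)) < 0.05
    for _ in range(100)
)
print(f"mined subgroup comparisons reaching p < 0.05: {hits} of 100")
```

A researcher who reported only the subgroups that crossed the p < 0.05 line, while downplaying the null whole-group comparison, would be doing exactly what the text above criticizes.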
The paper describes the most convoluted statistical shenanigans, deployed to create subgroups through byzantine mathematical contortions. The excerpt below gives only a small fraction of the gobbledygook describing the "keep torturing the data until it confesses" nonsense that was going on:
"HYDRA, an advanced semisupervised learning algorithm, was applied to identify the ADHD neuroanatomical subtypes with brain regional volumes as features (online supplemental figure S1a).24 HYDRA involved the following steps: first, participants with ADHD were labelled as positive and NCs as negative. Second, a convex polyhedron with K planes (equal to the number of clusters) was constructed to separate ADHD from NCs. Then, an extended standard linear maximum-edge classifier was used to calculate the distance between each participant with ADHD and each hyperplane. Finally, each participant with ADHD was assigned to the nearest hyperplane, resulting in K clusters (ADHD subtypes). In the clustering process, we used a 10-fold crossvalidation strategy. In each cross-validation step, ninefold data were used for clustering. After 10-fold crossvalidation, each participant with ADHD had nine clustering labels. A total of 20 clustering consensus steps were then performed to determine the final clustering label for each participant with ADHD using a co-occurrence matrix generated from the nine labels. The number of clusters (K) was set from 2 to 10, and the optimal cluster number was determined using the Adjusted Rand Index (ARI).25 To test the effect of feature selection on subtyping, the following steps were performed for the 116 features: first, we used Levene’s test to examine the homogeneity of variance for each feature between the NC and ADHD groups, resulting in the exclusion of 20 features. Next, we employed the intraclass correlation coefficient (ICC) to assess the consistency of the remaining features across sites with an ICC threshold of 0.15, excluding 68 features. Finally, we removed features with an absolute correlation coefficient with the ADHD index <0.15, leaving 11 features."
The statistical funny business going on was actually vastly more complicated than what is mentioned above. After this methodological madness that cannot be justified by any straightforward and reasonable explanation, the authors ended up with a "subtype 1" which "showed increased GMV [gray matter volume] mainly in the frontal, parietal and temporal regions." There was also a "subtype 2" which either "showed no significant GMV [gray matter volume] differences compared with NC [normal controls]" or "exhibited GM [gray matter] reductions mainly in the bilateral cerebellum, insula, limbic systems, frontal, temporal and occipital regions."
So "it's a wash." One little fraction of the subjects with ADHD had more gray matter in their brains, and some other little fraction of the subjects with ADHD had less gray matter in their brains. It's what we might expect if gray matter differences are not the cause of ADHD.
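The "it's a wash" outcome is just what clustering does to data with no group difference, something a toy simulation can show. The sketch below is purely illustrative and not from the paper: the "gray matter volumes," group sizes, and the simple one-dimensional two-cluster split are all invented assumptions. When a null group is split into two clusters, one cluster necessarily lands above the control mean and the other below it.

```python
import random
import statistics

random.seed(1)

# "Gray matter volumes" for ADHD subjects and controls, drawn from the
# SAME distribution: by construction there is no group-level difference.
adhd = [random.gauss(600.0, 50.0) for _ in range(135)]
ctrl = [random.gauss(600.0, 50.0) for _ in range(182)]

# Split the ADHD group into two clusters with a simple 1-D k-means:
# assign each subject to the nearer of two centers, then recompute centers.
c_low, c_high = min(adhd), max(adhd)
for _ in range(50):
    low = [x for x in adhd if abs(x - c_low) <= abs(x - c_high)]
    high = [x for x in adhd if abs(x - c_low) > abs(x - c_high)]
    c_low, c_high = statistics.mean(low), statistics.mean(high)

print("control mean:          ", round(statistics.mean(ctrl), 1))
print('"subtype 1" (high GMV):', round(statistics.mean(high), 1))
print('"subtype 2" (low GMV): ', round(statistics.mean(low), 1))
# The "high" cluster sits above the control mean and the "low" cluster
# below it -- "subtypes" manufactured by the clustering itself.
```

An "increased GMV" subtype and a "reduced GMV" subtype emerge here even though every subject was drawn from the same distribution, which is why opposite-direction subtype findings are no evidence of a real group difference.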
It should be noted that the difference reported for the subgroup with smaller gray matter volume was minimal, because we read of a statistical significance of merely "p<0.05," the weakest level that qualifies as "statistically significant" under current publication traditions. When so weak a level of significance is reported after searching through many analysis choices, it invites the criticism of "p-hacking": tweaking comparisons until some result crosses the p<0.05 line, leaving every reason to doubt that any substantial difference exists. Many scientists say that the tradition of reporting "p<0.05" results as "statistically significant" is a bad tradition of modern science, and that a more stringent criterion should be used, under which only results of "p<0.01" or "p<0.001" are reported as "statistically significant."
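The arithmetic behind the worry about the p<0.05 threshold is easy to check. Assuming independent tests of true null hypotheses (an idealization; the paper's tests were not independent), the chance of at least one spurious "significant" result across m tests is 1 - 0.95^m:

```python
# Probability of at least one spurious "p < 0.05" result across m
# independent tests of true null hypotheses: 1 - 0.95**m.
for m in (1, 5, 11, 20):
    print(f"{m:>2} tests -> {1 - 0.95 ** m:.0%} chance of a false positive")
# e.g. with 11 tests (the number of features the quoted methods retained),
# the chance of at least one false positive is about 43%.
```

So once an analysis winnows and compares enough features and subgroups, a lone p<0.05 finding is close to what chance alone would deliver.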
The Google Gemini infographic below explains the concept of p-hacking in scientific experiments.
Do not be fooled by the statistical "funny business" p-hacking shenanigans that the authors of this paper engaged in. Their data is entirely consistent with the conclusion that there is no difference between the brains of those with ADHD and the brains of normal subjects. With this paper we should ignore all of the "keep torturing the data until it confesses" p-hacking nonsense, and focus instead on a single sentence of the paper, the sentence declaring, "Voxel-wise comparisons of GMV [gray matter volume] between participants with ADHD and NCs [normal controls] revealed no significant differences, which contrasts with current understanding of the pathophysiological mechanisms underlying ADHD." In other words, there is no evidence that people with ADHD have different brains or fewer neurons than those without such a disorder. This result is consistent with my claim that the brain is not the source of the human mind.
