A study recently appeared attempting to measure how well different neuroscience theories of intelligence performed when trying to predict intelligence from brain scans. The theories were all just minor variations of the idea that intelligence is purely a product of the brain. All of the neuroscience theories tested flunked this test very badly. But the press release announcing the study failed to mention this big flop, and merely gave us a headline announcing that one of the theories performed better than the others.
The study was entitled "Investigating cognitive neuroscience theories of human intelligence: A connectome-based predictive modeling approach." The study used a surprisingly high number of subjects, about 300. In this respect the study was very different from the great majority of experimental neuroscience studies using brain scanning, which routinely use way-too-small study group sizes.

Nowadays experimental neuroscience studies mostly display an appalling failure to follow sensible standards. There is no standard in use for the minimum number of subjects that must be used. The great majority of published experimental neuroscience studies are junk science studies that use way-too-small study group sizes, typically fewer than 15 subjects per study group. The results reported in such studies are mainly noise and false alarms. Do not ever make the very large mistake of assuming that an experimental neuroscience study must have been good science because it passed peer review and got published in a major science journal. Nowadays peer reviewers are letting all kinds of junk studies and poorly designed research get published in leading neuroscience journals. The peer reviewers of neuroscience journals are typically scientists who have themselves written papers using Questionable Research Practices such as a lack of a blinding protocol, unreliable techniques for measuring animal fear, and way-too-small study group sizes. Such peer reviewers are reluctant to reject papers for committing the same sins committed in their own papers. It's like having people who cheat on their taxes every year placed in charge of auditing other people's tax returns.
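To make the "noise and false alarms" point concrete, here is a minimal simulation sketch of my own (not anything from the paper): it estimates how often a study with only 15 subjects per group manages to detect a real effect. The effect size (half a standard deviation) and the .05 significance threshold are illustrative assumptions.

```python
# A minimal simulation (not from the paper) illustrating why tiny study
# groups are unreliable. The assumed numbers (n = 15 per group, a true
# "medium" effect of 0.5 standard deviations, alpha = .05) are
# illustrative, not taken from any particular study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, true_effect, n_trials = 15, 0.5, 10_000

detections = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        detections += 1

# With n = 15 per group, even a real medium-sized effect is detected
# only about a quarter of the time.
print(f"Power at n=15 per group: {detections / n_trials:.2f}")
```

Under these assumptions a real effect is detected only about 26% of the time, and at such low power the "positive" findings that do get published are apt to be flukes or inflated estimates.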
In the study "Investigating cognitive neuroscience theories of human intelligence: A connectome-based predictive modeling approach," about 300 subjects were given a large variety of cognitive tests, and the same subjects had their brains scanned. From features detected in the brain scans, a group of neuroscience theories was used to make predictions about how well the subjects should have performed on the intelligence tests. Graphs were created showing how well these predictions matched reality.
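The connectome-based predictive modeling approach named in the paper's title follows, in broad outline, a simple recipe: select brain-connectivity "edges" that correlate with the behavioral score in training subjects, summarize each subject by those edges, fit a linear model, and score the predictions on held-out subjects. The sketch below is my own illustration of that general recipe using synthetic data; the variable names, the p < .05 selection threshold, and the 250/50 subject split are assumptions for illustration, not the authors' actual pipeline.

```python
# A rough sketch of the general connectome-based predictive modeling
# recipe, using synthetic data. Not the authors' code; the threshold
# and data split are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_edges = 300, 1000
connectomes = rng.normal(size=(n_subjects, n_edges))  # connectivity edges
g_scores = rng.normal(size=n_subjects)                # intelligence scores

train = np.arange(0, 250)   # training subjects
test = np.arange(250, 300)  # held-out subjects

# Step 1: keep only edges whose training-set correlation with g passes
# an (assumed) p < .05 threshold.
p_edges = np.array([stats.pearsonr(connectomes[train, e], g_scores[train])[1]
                    for e in range(n_edges)])
selected = p_edges < 0.05

# Step 2: summarize each subject by the sum of selected edge strengths,
# then fit a one-variable linear model on the training subjects.
train_sums = connectomes[train][:, selected].sum(axis=1)
slope, intercept = np.polyfit(train_sums, g_scores[train], 1)

# Step 3: predict the held-out subjects and score the predictions with
# Pearson's r -- the statistic displayed in the paper's figures.
test_sums = connectomes[test][:, selected].sum(axis=1)
predictions = slope * test_sums + intercept
r, _ = stats.pearsonr(predictions, g_scores[test])
print(f"Predicted-vs-observed r on held-out subjects: {r:.2f}")
```

The correlation between predicted and observed scores on the held-out subjects is the figure of merit the paper reports in its scatter plots and bar graphs.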
The neuroscience theories tested against reality included the following:
(1) A "lateral PFC" theory assuming that intelligence mainly comes from the prefrontal cortex.
(2) A "Parieto-Frontal Integration" theory that "proposes that connectivity of a distributed frontoparietal network accounts for intelligence by enabling the integration of knowledge between frontal and parietal areas to support hypothesis generation and problem solving."
(3) A "Multiple Demand" theory that "incorporates more recent advances in understanding the network architecture of general intelligence by appealing to an even broader network of frontoparietal and cinguloopercular regions."
(4) A "Process Demand" theory that "provides a novel framework centered on the idea that general intelligence reflects the engagement of multiple cognitive processes represented by the overlap (or shared connections) among brain networks."
(5) A "Network Neuroscience" theory that proposes that intelligence "emerges from individual differences in the network topology and dynamics of the human connectome."
The paper has some graphs showing how well these theories predicted intelligence. We get two main types of graphs: scatter plots and bar graphs of correlation values.
Before discussing the results, I must give a little primer on scatter plots involving correlation. A scatter plot shows data items for which two numbers have been collected. For example, if you kept track of how much ice cream was sold at a store, while also recording each day's temperature, you could make a scatter plot of daily sales against daily temperature; and you would see a nice correlation between hot weather and ice cream sales. When there is a strong correlation, a scatter plot will look something like the graph below, showing a very clear correlation:

Graph 1: A scatter plot showing a strong correlation
When there is very little or no correlation, a scatter plot will look something like the graph below, with the points scattered all over and showing no clear line:
Graph 2: A scatter plot showing little or no correlation
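For readers who want to see correlation in action, here is a toy computation (entirely unrelated to the paper's data, with made-up ice cream numbers) contrasting the two cases using Pearson's correlation coefficient:

```python
# A toy illustration of the difference between a strong correlation and
# essentially none. The data are made up for demonstration purposes.
import numpy as np

rng = np.random.default_rng(42)
temperature = rng.uniform(10, 35, 100)  # daily temperature, degrees C

# Strong relationship: sales track temperature plus a little noise.
sales_hot = 20 * temperature + rng.normal(0, 40, 100)
# No relationship: sales are pure noise, unrelated to temperature.
sales_random = rng.normal(400, 100, 100)

print(np.corrcoef(temperature, sales_hot)[0, 1])     # close to 1 (Graph 1)
print(np.corrcoef(temperature, sales_random)[0, 1])  # close to 0 (Graph 2)
```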
The study "Investigating cognitive neuroscience theories of human intelligence: A connectome-based predictive modeling approach" has some scatter plots showing how well the various "brains make minds" models performed. The scatter plots all look like Graph 2 above, and show the models flunking the test by performing very poorly at predicting intelligence.
Figure 4 of the paper shows the scatter plot below, in which we see the failure of the "lateral PFC" model to perform impressively, with no clear trend line:
A bar graph next to this graph shows us that the predictive performance is dismal, with the performance seeming to be worse than what we would expect from mere guessing. Figure 5 of the paper looks like the scatter plot shown above, and shows very bad predictive performance of the "Parieto-Frontal Integration" theory, with no clear trend line. Figure 6 of the paper looks like the scatter plot shown above, and shows very bad predictive performance of the "Multiple Demand" theory, with no clear trend line.
Discussing the "Process Overlap" theory, the paper tells us that "we find evidence that whole-brain functional edges do a relatively poor job at predicting g [intelligence] compared with other connectivity profiles, with the best-performing model (Figure 7a) generating predictions of r = .11." The r is a measure of correlation, which can range from r = -1 (a perfect negative correlation) through r = 0 (no correlation) to r = 1 (a perfect positive correlation). A correlation of only .11 is a negligible correlation. As a general rule of thumb, there is no good evidence of a causal relation unless you find some r value greater than .3, and the evidence for a relation is weak unless the r value is .5 or greater.
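One way to see how little an r of .11 amounts to: squaring a correlation coefficient gives the fraction of variance in one variable accounted for by the other. A quick computation:

```python
# Squaring a correlation gives the share of variance in one variable
# accounted for by the other. For the paper's best-performing model:
r = 0.11
print(f"Variance in intelligence explained: {r**2:.1%}")  # about 1.2%
# So even the best-performing model leaves roughly 99% of the variation
# in measured intelligence unaccounted for.
```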
Finally the paper comes to the performance of the theory that supposedly produces "the most robust predictions of general intelligence" of all the theories tested: the "Network Neuroscience" theory. Unfortunately, the performance of this "best of the lot" winner is dismal. Figure 11 of the paper gives us this scatter plot showing the performance of the "Network Neuroscience" theory:
Again, we see a scatter plot failing to show any clear trend line. The bar graph included with this scatter plot further clarifies how badly the "Network Neuroscience" theory performs. In that bar graph we see that with most versions of the theory, the correlation level is actually less than 0: a negative correlation. That is worse than the result you would get from random guessing or rolling dice.
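That pattern of near-zero and slightly negative correlations is exactly what a model with no real predictive signal tends to produce under cross-validation. Here is a small simulation of my own (not the paper's analysis; the sample sizes, feature counts, and choice of ridge regression are arbitrary assumptions) demonstrating the point:

```python
# A small simulation (my own, not the paper's analysis) showing that a
# model trained on pure-noise features tends to produce cross-validated
# predictions that correlate with the truth at roughly r = 0, and often
# slightly below -- the pattern seen in the paper's bar graphs.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
rs = []
for _ in range(200):
    X = rng.normal(size=(100, 50))   # 50 noise features, 100 subjects
    y = rng.normal(size=100)         # scores unrelated to the features
    preds = cross_val_predict(Ridge(), X, y, cv=5)
    rs.append(np.corrcoef(preds, y)[0, 1])

print(f"Mean cross-validated r with no signal: {np.mean(rs):+.3f}")
print(f"Share of runs with negative r: {np.mean(np.array(rs) < 0):.0%}")
```

In held-out data, an overfit model with nothing real to learn typically lands at or slightly below r = 0, which is what the paper's bar graphs show for most versions of the theory.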
The end of the "Investigating cognitive neuroscience theories of human intelligence: A connectome-based predictive modeling approach" paper fails to accurately characterize these extremely poor results from all of the models. We read multiple times a totally unjustified use of the phrase "reliable predictions of g [intelligence]," a phrase that does not match any of the graphs shown. The paper should have had a conclusion section mentioning the abysmal predictive failure of all of the models tested. Instead the paper ends with some unjustified language contradicting the data it displays. It's as if the authors failed to study their own graphs, or failed to accurately describe them.

This happens very frequently in today's neuroscience literature: authors make claims (particularly in paper titles and abstracts) that do not match the data they have collected. The very marginal and very weak association between cognitive scores and brains shown by a small subset of the data can easily be explained by factors having nothing to do with intelligence, because brain differences can cause things such as differences in perceptual ability, differences in muscle speed, and differences in manual dexterity, all of which can affect IQ test scores.
The press release of the study gives us this headline: "Study: Network neuroscience theory best predictor of intelligence." An accurate headline would have been this: "Models Assuming Brain-Based Intelligence All Flunk a Large Brain Scan Test." The reported results are quite consistent with the idea that your brain does not make your mind. The press release basically does a cover-up job, by failing to mention the very bad predictive performance of all of the theories.
We hear quotes from a neuroscientist who fails to mention the very bad failure of all of the "brains make minds" theories at predicting intelligence from brain scans. Instead the neuroscientist gives us a little empty hand-waving, trying to explain problem-solving by mentioning "connections." A connection of brain cells does nothing to explain problem solving or intelligence. We know of countless highly connected things that are utterly mindless, like the atoms in a crystal lattice. The paper I have discussed suggests there is no robust correlation between brain connections and intelligence.
This result should come as no surprise, as it matches a previous study of brain connectivity. That study was announced on the Science Daily web site with this headline: "MRI scans of the brains of 130 mammals, including humans, indicate equal connectivity."
We read the following: "Researchers at Tel Aviv University, led by Prof. Yaniv Assaf of the School of Neurobiology, Biochemistry and Biophysics and the Sagol School of Neuroscience and Prof. Yossi Yovel of the School of Zoology, the Sagol School of Neuroscience, and the Steinhardt Museum of Natural History, conducted a first-of-its-kind study designed to investigate brain connectivity in 130 mammalian species. The intriguing results, contradicting widespread conjectures, revealed that brain connectivity levels are equal in all mammals, including humans."