Quanta Magazine is a widely read online magazine with slick graphics. On topics of science, the magazine is guilty of the most glaring failures, again and again. Quanta Magazine often assigns its online articles about great biology mysteries (involving riddles a thousand miles over the heads of PhDs) to writers who lack even a bachelor's degree in biology. It often assigns such articles to people identified as "writing interns." The articles at Quanta Magazine often contain misleading prose, groundless boasts, or the most glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here.
The writers at Quanta Magazine are very often guilty of bootlicking, a word meaning excessive deference to an authority or a superior. The latest example of bootlicking at the magazine is an article entitled "How 'Event Scripts' Structure Our Personal Memories." The subtitle makes this very untrue claim: "By screening films in a brain scanner, neuroscientists discovered a rich library of neural scripts — from a trip through an airport to a marriage proposal — that form scaffolds for memories of our experience." The claim has no basis in fact. The article follows it with this equally untrue claim: "'Event scripts' are distinct neural fingerprints that encode repeated sequences of events, such as those that unfold during a trip through the airport." No such things have been found.
The article begins by telling us tall tales about neuroscientist Christopher Baldassano, incorrectly stating this: "Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events." No such thing happened. The article is referring to a very low-quality paper co-authored by Baldassano, one entitled "Representation of Real-World Event Schemas during Narrative Perception."
The study had the following flaws:
(1) The study group sizes in this task-based fMRI study were skimpy, consisting of only 15 or 16 subjects per study group. Referring to study group sizes twice as large, an article on neurosciencenews.com states this: "A new analysis reveals that task-based fMRI experiments involving typical sample sizes of about 30 participants are only modestly replicable. This means that independent efforts to repeat the experiments are as likely to challenge as to confirm the original results." A standard power calculation, sketched just after this list, shows how far short such sample sizes fall.
(2) No blinding protocol was used.
(3) The paper was not preregistered: it did not test any hypothesis formulated before the data were gathered, using a method specified before the data were gathered.
(4) The paper is a bad example of "keep torturing the data until it confesses" methodology. The paper has graphs that are not based on simple brain scans, but are instead based on brain scan data after it has been manipulated through the most convoluted pathway of arbitrary contortions.
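Regarding the first flaw: doing a sample size calculation is neither hard nor obscure. Here is a minimal sketch using the statsmodels package; the medium effect size (Cohen's d = 0.5), the .05 significance level and the 80% power target are conventional illustrative assumptions, not numbers taken from the paper under discussion:

```python
# Illustrative sample size calculation. The effect size (Cohen's d = 0.5),
# alpha (.05) and power target (80%) are conventional assumptions, not
# numbers from the paper under discussion.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Subjects needed per group to detect a medium effect with 80% power:
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Subjects needed per group: {n_required:.0f}")   # about 64

# Power actually achieved with only 15 subjects per group:
achieved = analysis.power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"Power with 15 subjects per group: {achieved:.2f}")  # about 0.25
```

Under these conventional assumptions you would need roughly 64 subjects per group; with only 15 per group, the chance of detecting such an effect is only about one in four.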
As for the fourth flaw: below, quoted from the paper, is a description of only a small fraction of the "keep torturing the data until it confesses" nonsense that was occurring:
"For each story, four regressors were created to model the response to the four schematic events, along with an additional nuisance regressor to model the initial countdown video. These were created by taking the blocks of time corresponding to these five segments and then convolving with the HRF from AFNI (Cox, 1996). A separate linear regression was performed to fit the average response of each group (in the 100-dimensional SRM space) using the regressors, resulting in a 100-dimensional pattern of coefficients for each event of each story in each group. For every pair of stories, the pattern vectors for each of their corresponding events were correlated across groups (event 1 from Group 1 with event 1 from Group 2, event 2 from Group 1 with event 2 from Group 2, etc., as shown in Fig. 2a) and the four resulting correlations were averaged. This yielded a 16 × 16 matrix of across-group story event similarity. To ensure robustness, the whole process was repeated for 10 random splits of the 31 subjects, and the resulting similarity matrices were averaged across splits...To explore the dimensionality of the schematic patterns, we reran the analysis after preprocessing the data with a range of different SRM dimensions, from 2 to 100. The resulting curve of z values versus dimensionality for each region was then smoothed with the LOWESS (Locally Weighted Scatterplot Smoothing) algorithm implemented in the statsmodels python package (using the default parameters). To generate the searchlight map, a z value was computed for each vertex as the average of the z values from all searchlights that included that vertex. The map of z values was then converted into a map of q values using the same false discovery rate correction that is used in AFNI (Cox, 1996)....The resampled data (time courses on the left and right hemispheres, and in the subcortical volume) were then read by a custom python script, which implemented the following preprocessing steps: removal of nuisance regressors (the 6 degrees of freedom motion correction estimates, and low-order Legendre drift polynomials up to order [1 + duration/150] as in Analysis of Functional NeuroImages [AFNI]) (Cox, 1996), z scoring each run to have zero mean and SD of 1, and dividing the runs into the portions corresponding to each stimulus. All subsequent analyses, described below, were performed using custom python scripts and the Brain Imaging Analysis Kit (http://brainiak.org/)."
The quoted passage covers only a small fraction of the contortion inanity that was going on. The paper has many other paragraphs sounding like the one just quoted. To see the ugliness of the manipulation muddle that was occurring, you must look at the programming code. The authors have made their code public, and you can see it using the link here. Looking at their programming scripts, we see an appalling example of arbitrary, unjustifiable algorithms, the most convoluted spaghetti code. The brain scan data is being passed through many types of poorly documented programming loops that are doing God-only-knows-what kind of mystifying manipulation. Below is a simplified illustration of only a tiny part of the bizarre iteration that goes on.
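(What follows is a reconstruction of the flavor of the pipeline for the reader's benefit, built only from the steps the paper itself describes: HRF-convolved regressors, per-group regression in SRM space, across-group correlation, and averaging over random subject splits. It is not a verbatim excerpt from the authors' repository, and the array shapes, the toy HRF and the random stand-in data are all placeholder assumptions.)

```python
# Simplified reconstruction of the flavor of the analysis, NOT the authors'
# code. Shapes, the toy HRF and the random stand-in data are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_stories, n_events, n_dims, n_T = 16, 4, 100, 200
hrf = np.exp(-0.5 * (np.arange(12) - 5.0) ** 2)          # toy stand-in HRF
blocks = [(np.arange(n_T) // 50 == e).astype(float) for e in range(n_events)]
design = np.column_stack([np.convolve(b, hrf)[:n_T] for b in blocks])

def event_patterns(group_response):
    """Regress a group's average SRM-space response onto the event
    regressors, yielding one coefficient pattern per schematic event."""
    coeffs, *_ = np.linalg.lstsq(design, group_response, rcond=None)
    return coeffs                                        # (n_events, n_dims)

similarity = np.zeros((n_stories, n_stories))
for split in range(10):                # repeat over 10 random subject splits
    groups = np.array_split(rng.permutation(31), 2)
    # random noise standing in for each group's averaged response per story
    pats = [[event_patterns(rng.standard_normal((n_T, n_dims)))
             for _ in range(n_stories)] for _ in groups]
    for i in range(n_stories):         # correlate matching events across
        for j in range(n_stories):     # groups; average the 4 correlations
            similarity[i, j] += np.mean(
                [np.corrcoef(pats[0][i][e], pats[1][j][e])[0, 1]
                 for e in range(n_events)])
similarity /= 10                       # average the 16 x 16 matrix over splits
# The "story event similarity" that emerges is a statistic stacked on top of
# regression coefficients, correlations and averages, many steps removed from
# anything the scanner directly measured.
```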
You might call this "iteration inanity." The output is some kind of utterly artificial "witches' brew" that cannot be called the original data gathered, or anything like the original data gathered. We should not have any confidence in any of the main graphs in the paper, because they are all produced by passing brain scan data through spaghetti-code convolution contortions like the kind sketched above. This is a severe example of "keep torturing the data until it confesses," what we might call a Spanish Inquisition level of torturing. We have some utterly artificial transmogrification mess that is the result of obscure, arbitrary programming manipulations, some gobbledygook rigmarole. The authors have not found any "event scripts" or patterns in the brain. The only thing they have found is something they created themselves by spaghetti-code programming that distorts and manipulates the original brain scan data.
Nothing real about the brain is being revealed here. If any "scripts" or patterns were discovered, the authors were merely discovering the outputs of their own data-manipulating programming loops. To claim the output of such distortion loops as being something in the brain is as misleading as picking up 100 stones from the seashore, forming them into a sculpture of a cat, and then claiming that the waves produced a sculpture of a cat.
It is rather obvious that our Quanta Magazine writer has not learned how to distinguish good neuroscience research from very bad neuroscience research. That writer states this:
"In 2004, the neuroscientist Uri Hasson and his colleagues at the Weizmann Institute of Science in Israel started carving a path through the thicket of voxels. In one of their studies, five people, while lying in a brain scanner, watched 30 minutes of The Good, the Bad and the Ugly (1966), a spaghetti western starring Clint Eastwood. Comparing the data from the five participants, the researchers noted when and where brain activity surged or waned in unison."
Why would anyone even bother to mention a research study using a study group of only five subjects, so obviously too small? The writer then gives us one more bum steer. We are given false claims about a study by a neuroscientist named Chen:
"In 2012, Chen joined Hasson’s lab, then at Princeton, and extended the approach to memory. She had people watch the first episode of the television show Sherlock (2010), featuring Benedict Cumberbatch as a modern take on the legendary detective. Then the study participants talked through their memory of it, while still lying in the scanner. The experiment worked. Chen and her colleagues were able to match brain activity recorded during participants’ recollections to specific scenes around 60 seconds long — for example, when Sherlock meets Watson."
The claim is false, because the study was very low-quality work. The link given in the Quanta Magazine article is to the paper "Shared memories reveal shared structure in neural activity across individuals," which you can read here. The study had the following defects:
- The study used too-small study groups, such as one with only 8 subjects and another with only 9 subjects. The authors confess, "No statistical methods were used to pre-determine sample sizes but our sample sizes are similar to those reported in previous publications." It is well known that neuroscience experiments typically use far too few subjects to achieve good statistical power, so appealing to other experimenters' similarly sized study groups is no excuse for failing to do a sample size calculation (such as the one sketched earlier) to determine a good study group size.
- The study failed to use a blinding protocol. The authors confess, "Data collection and analysis were not performed blind to the conditions of the experiments."
- Instead of simply using the original brain scan data, the authors performed very many obscure and arbitrary convolutions, contortions and distortions of the original data.
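For readers wondering what the claimed scene "matching" amounts to in practice: in simplified form, the pattern recorded while a participant recalls a scene is correlated against the patterns recorded while watching each scene, and a "match" is scored when the correct scene correlates best. Here is a toy sketch with random placeholder data and shapes, not the study's actual analysis:

```python
# Toy sketch of viewing-versus-recall scene matching. The data and shapes
# are placeholders; this is not the study's actual analysis code.
import numpy as np

rng = np.random.default_rng(1)
n_scenes, n_voxels = 50, 1000
viewing = rng.standard_normal((n_scenes, n_voxels))   # pattern per watched scene
recall = viewing + 2 * rng.standard_normal((n_scenes, n_voxels))  # noisy recall

correct = 0
for s in range(n_scenes):
    r = [np.corrcoef(recall[s], viewing[t])[0, 1] for t in range(n_scenes)]
    correct += int(np.argmax(r) == s)   # does the right scene correlate best?
print(f"Scenes matched correctly: {correct} of {n_scenes}")
```

A procedure like this can be tuned to produce "matches" under many analytic choices, which is exactly why blinding, preregistration and adequate sample sizes matter.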