The year 2024 ends today, and we can ponder how neuroscientists have made no progress this year in trying to substantiate the main dogmas of their conformist belief community, such as the dogma that brains make minds and the dogma that brains store memories. A story yesterday at the LiveScience.com site was entitled "15 times the brain blew our minds in 2024." The article attempts to tell us about the biggest research advances in neuroscience in 2024. We get no mention of any very impressive result that is an example of robust science shedding light on human brains.
There is a mention of the research claiming to have found 3 copies of a memory, a claim that is debunked in my post here entitled "No, They Didn't Find in a Brain '3 Copies of Every Memory,' and They Never Even Found One." The research study in question was a glaring example of Questionable Research Practices, such as the use of way-too-small study groups (one group had only four mice). The LiveScience article also has an "Origin of Psychosis" section that incorrectly describes a brain scan study by saying, "Using artificial intelligence (AI) to analyze the scans, scientists found overlapping 'signatures' in the brains of people with psychosis." No, the actual study analyzed brain signatures of about 100 people with a rare genetic mutation that merely increases the risk of schizophrenia. Such a mutation occurs in one in 4000 people. Then the article has a teaser title of "Conscious Lab Grown Brains?" But we are assured this will not happen any time soon.
As a representative snapshot of the dismal state of neuroscience research, let's look at some research referred to today in an article on the RealClearScience.com site. We have a link to an article with the misleading headline "Why your sleeping brain replays new rewarding experiences." There is no actual evidence that memory recall occurs because of brain activity, and there is no evidence at all that people tend to dream of rewarding experiences during sleep. The article refers us to a study published in the journal Nature, a study that is an example of very low-quality neuroscience research.
The study is entitled "Reward biases spontaneous neural reactivation during sleep," and had a very silly design. A small group of people (a starting group of only 18) played video games while having their brains scanned in an fMRI machine, and were then told to fall asleep in the scanner. The idea was to study the brain scans and look for signs that during sleep people were replaying rewarding moments from their video game experiences. This idea was ludicrous. You cannot tell what people are thinking or remembering or dreaming by looking at brain scans.
A starting group of 18 people was used, but some of these were disqualified because they did not fall asleep or did not win in the game. That left a study group of only 13 people, way too small for a reliable result in a correlation-seeking study like this. The paper "Prevalence of Mixed-methods Sampling Designs in Social Science Research" has a Table 2 giving recommendations for minimum study group sizes for different types of research. According to the paper, the minimum number of subjects for an experimental study is 21 per study group. The same table lists 61 subjects per study group as a minimum for a "correlational" study.
In her post “Why Most Published Neuroscience Findings Are False,” Kelly Zalocusky PhD calculates that the median effect size of neuroscience studies is about 0.51. She then states the following, talking about statistical power (something that needs to be 0.5 or greater to be moderately convincing):
"To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous. To bump our power up to 0.5, we would need an n of 31 per group. A power of 0.8 would require 60 per group."
If we describe a power of 0.5 as being moderately convincing, it follows that 31 subjects per study group are needed for an experimental neuroscience study to be moderately convincing. But most experimental neuroscience studies use fewer than 15 subjects per study group. And the study "Reward biases spontaneous neural reactivation during sleep" used a way-too-small study group of only 13. The authors would have discovered their inadequate sample size if they had done a sample size calculation, but they did not do such a thing (or at least they do not mention doing one).
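Zalocusky's numbers are easy to check for yourself. Here is a minimal sketch (my own illustration, not from her post) using the normal approximation to the two-sample t-test; the exact t-based calculation differs slightly at small sample sizes, and the function name and the 0.05 significance level are my own assumptions:

```python
from statistics import NormalDist

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means,
    using the normal approximation to the t-test."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)                   # about 1.96 for alpha = 0.05
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)

# With the median neuroscience effect size of 0.51:
for n in (12, 13, 31, 60):
    print(n, round(approx_power(0.51, n), 2))
# 12 or 13 subjects per group give power of only about 0.24 to 0.25;
# roughly 31 per group are needed for 0.5, and roughly 60 for 0.8.
```

So with 13 subjects per group and a typical effect size, a study like this has roughly a one-in-four chance of detecting a real effect of median size, which matches the figures quoted above.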
What went on is that the authors analyzed brain scans, looking for something they could claim is some faint trace of a memory replay or dream replay of some reward experienced during a video game. It was an affair of comparing brain scans taken during sleep against scans taken at moments when someone had a video game reward, looking for some similarity somewhere. Since no blinding protocol was used, no pre-registration was done, and no control subjects were used, any claim to have found evidence of such a thing is worthless, particularly given the tiny study group size. It's just fishing-expedition, see-what-you're-hoping-to-find pareidolia. The authors were free to slice and dice the data in any way they wanted, using any of endless possible analysis pathways, until they found something they could claim as support for their hypothesis. Similarly, a person eagerly scanning thousands of photos of clouds hoping to find some animal shape can report a few successes here or there. Nowadays people doing these kinds of noise-mining fishing expeditions are aided by correlation-seeking software with a great ability to analyze data in a thousand-and-one different ways, finding false-alarm correlations that reflect no causal relation. So it's ever-easier to "keep torturing the data until it confesses," like the hypothetical duo below:
A sensible way to proceed in such a study would be to wake the subjects up and ask them whether they had any dream involving a reward. Nothing like that was done. The subjects were not asked what dreams they had, presumably because, if asked, they would have reported nothing about dreams related to their video game experiences.
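The fishing-expedition problem described above is easy to quantify with a short simulation. The sketch below is my own illustration, not the authors' analysis: it generates pure random noise for 13 "subjects" across 1,000 hypothetical analysis pathways and counts how many pathways yield a nominally "significant" correlation at p < 0.05. Even with nothing but noise, dozens of pathways look like discoveries.

```python
import math
import random
from statistics import NormalDist

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(1)                       # arbitrary seed, for reproducibility
n_subjects, n_pathways = 13, 1000
false_alarms = 0
for _ in range(n_pathways):
    x = [random.gauss(0, 1) for _ in range(n_subjects)]   # pure noise
    y = [random.gauss(0, 1) for _ in range(n_subjects)]   # pure noise
    r = pearson_r(x, y)
    # Approximate two-sided p-value via the Fisher z-transform
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n_subjects - 3)
    if 2 * (1 - NormalDist().cdf(abs(z))) < 0.05:
        false_alarms += 1
print(false_alarms, "of 1000 pure-noise pathways look 'significant'")
```

Roughly 5% of the pathways will pass the significance threshold by chance alone, so a researcher free to keep trying analysis pathways until one "works" will essentially always find something to report.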
I have been recording my dreams every night for almost four years, and you can read about such dreams in my very long post here. It is not true that people tend to dream of rewarding experiences they have had, and it is not true that people tend to dream of rewarding experiences they had in video games. For nearly four years I have played video games for about an hour every night, just before sleeping; and during those same years I have recorded my dreams throughout the night, as soon as I awoke and remembered a dream (I awake quite a few times each night). I have had very many rewarding experiences during such video game playing (such as regularly advancing to new levels and overcoming hard challenges). But I have never noticed any tendency for my dreams to be about video game experiences, nor any tendency for my dreams to be about rewarding experiences I have had. I cannot recall ever having a dream that seemed to be inspired by anything I had experienced in a video game.
We should note well the needless potential risk to subjects occurring for the sake of this very-low-quality study. We are told subjects had their brains scanned for 40 minutes while playing video games, and had their brains scanned for an average of nearly two hours while they were sleeping: "The sleep session lasted between 51 min and 2 h 40 min (mean: 1 h 43 min)." We are told, "The two runs of the game session comprised 615 scans and 603 scans, respectively, and the run of sleep session, for the data used in the analyses reported in the main text, comprised on average 2789 scans (between 1459 and 3589 scans)." An average brain scan for medical purposes requires only a few minutes of scanning; according to the page here, the default is only 32 slices: "the default of 32 slices will cover most of the brain in most subjects." Here we have hours of medically unnecessary brain scanning that may have exposed the subjects to needless risks, the type of risks discussed in my post here, entitled "Poorly Designed Brain Scan Experiments Needlessly Put the Needy at Risk." There is also an additional potential for trauma when someone wakes up in a brain scanner. The subjects may have been subjected to substantial risk, only for the sake of a study so poorly designed that it fails to produce robust evidence to back up its claims. The scanning was done with 3T scanners twice as intense as the 1.5T scanners that have been used for most medical brain scanning. Some neuroscientists are starting to use 7T brain scanners, despite the lack of adequate data on the long-term safety of scanning at such an intensity.
According to the paper here ("The effects of repeated brain MRI on chromosomal damage"), which assessed genetic damage from 3T MRI scans, "While we do not report any change after a single MRI session, repeated exposure was associated with an increase in the frequency of chromosomal deletions." The paper "Genotoxic effects of 3 T magnetic resonance imaging in cultured human lymphocytes" found that chromosomal aberrations (CA) increased in proportion to how long cells were exposed to a 3T MRI scan. It says, "the frequencies of CAs in lymphocytes exposed for 0, 45, 67, and 89 min were 1.33, 2.33, 3.67, and 4.67 per 200 cells, respectively." Such chromosomal deletions and aberrations probably increase cancer risk, or the risk of equally devastating problems.
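The proportionality the second paper reports can be checked directly from the quoted numbers. A quick least-squares fit (my own arithmetic, not a calculation from the paper) shows the aberration counts rising almost perfectly linearly with exposure time:

```python
# Exposure times (minutes) and chromosomal aberrations per 200 cells,
# as quoted from the 3T genotoxicity paper
minutes = [0, 45, 67, 89]
aberrations = [1.33, 2.33, 3.67, 4.67]

n = len(minutes)
mx = sum(minutes) / n
my = sum(aberrations) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(minutes, aberrations))
sxx = sum((x - mx) ** 2 for x in minutes)
syy = sum((y - my) ** 2 for y in aberrations)

slope = sxy / sxx             # extra aberrations per minute of scanning
r = sxy / (sxx * syy) ** 0.5  # correlation coefficient
print(round(slope, 3), round(r, 3))   # slope of about 0.038, r of about 0.98
```

A correlation near 0.98 and a steady rate of roughly one additional aberration per 200 cells for every 25 to 30 minutes of exposure: the longer the scan, the more the chromosomal damage.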
In this study the brain scans seem to have been completely unnecessary. The hypothesis that recent reward memories are replayed during sleep could have been tested much better by using a decent study group of 30 subjects, having them play reward-producing video games late at night, and then having them sleep in the lab, with no brain scanning ever done. You could have woken the subjects at different intervals in the night and asked them about their dreams, to judge whether the dreams resembled their video game experiences or the rewards of such games. How many neuroscientists are like the guy imagined below?
It seems like scan-o-mania
The journal Nature has published this very low-quality study. The same journal regularly publishes neuroscience research of very low quality. It is a huge mistake to think that neuroscience research is strong because it is published in journals such as Nature and Cell. These journals have for many years published many examples of neuroscience research of very low quality.