A 2016 article on Vox.com was a remarkable confession of how bad things are in the world of scientific research. The article was entitled "The 7 biggest problems facing science, according to 270 scientists." We read this:
"We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science....Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public."
The author writes about what is called publication bias: the tendency of science journals to prefer publishing studies that report some positive effect over studies that report no such effect (what is called a null result). We read this:
"Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase 'publish or perish' hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side."
The statement won't be very clear to the average reader. By "studies that fail" the author means not studies that fail to follow good scientific practice or that fail to be completed, but studies that fail to find a positive result. An example would be a study testing whether removing one particular gene from mice has an effect on their memory, and which reports no effect from such a removal. By "failed studies can mean career death" the author means that studies reporting no positive effect often go unpublished, because of a publication bias in which positive results are preferred. A scientist doing enough such unpublished studies might end up with a low count of published papers.
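The distorting effect of publication bias can be illustrated with a small simulation. The sketch below is my own illustration, not something from the Vox article; the study sizes and thresholds are arbitrary choices. It simulates many studies of an effect that does not actually exist, "publishes" only the studies that happen to reach nominal statistical significance, and shows that the published literature reports a substantial average effect even though the true effect is zero:

```python
import random
import statistics

random.seed(42)

N_STUDIES = 1000   # independent studies of the same (nonexistent) effect
N_SUBJECTS = 20    # subjects per study

all_effects = []        # measured effect in every study
published_effects = []  # measured effect in "published" studies only

for _ in range(N_STUDIES):
    # The true effect is zero: the measurements are pure noise.
    data = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    mean = statistics.mean(data)
    se = statistics.stdev(data) / (N_SUBJECTS ** 0.5)
    all_effects.append(mean)
    # Publication bias: only studies whose result looks "significant"
    # (sample mean more than 1.96 standard errors from zero) get published.
    if abs(mean) > 1.96 * se:
        published_effects.append(abs(mean))

print(f"All {N_STUDIES} studies, average |effect|: "
      f"{statistics.mean(map(abs, all_effects)):.3f}")
print(f"Published studies: {len(published_effects)} of {N_STUDIES}, "
      f"average |effect|: {statistics.mean(published_effects):.3f}")
```

Roughly 5 percent of the null studies clear the significance bar by chance, and those lucky few report effects far larger than the average across all studies; a reader of only the published papers would conclude the nonexistent effect is real and sizable.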
We read about conflicts of interest:
"Already, much of nutrition science, for instance, is funded by the food industry — an inherent conflict of interest. And the vast majority of drug clinical trials are funded by drugmakers. Studies have found that private industry–funded research tends to yield conclusions that are more favorable to the sponsors."
Such conflicts of interest taint neuroscience research, because a large fraction of neuroscience research is funded (directly or indirectly) by pharmaceutical companies and biotech device manufacturers hoping to produce some result they can claim is scientific evidence in favor of some pill or device they are selling. We read in the article that some professors are spending up to 50% of their time writing research grant proposals.
We get this quote:
"'As it stands, too much of the research funding is going to too few of the researchers,' writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. 'This creates a culture that rewards fast, sexy (and probably wrong) results.'"
A culture that rewards wrong results? How messed up is that?
We read the following:
"The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more 'revolutionary.' (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)
Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others....Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.
'I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,' writes Jess Kautz, a PhD student at the University of Arizona. 'And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.'"
We read a quote from Joseph Hilgard, who says, "The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true." We read the claim that 85 percent of research is "routinely wasted on poorly designed and redundant studies." We read the claim that up to 30 percent of research turns out to be wrong or to consist of exaggerated results.
We read about how badly published results fail to be replicated. We have a big boldface section header saying this:
"Replicating results is crucial. But scientists rarely do it."
We get this example:
"The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all."
We have this statement about misleading science journalism and misleading university press releases:
"Science journalism is often full of exaggerated, conflicting, or outright misleading claims...Sometimes bad stories are peddled by university press shops....Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice....The 'toxic dynamic' of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step."
The long 2016 Vox article mentions some ways that this sorry state of science research could be improved. But in the ten years since the article was published, there has been no improvement. All the problems discussed in the 2016 article still exist, and as badly as they existed then. There is no evidence that research scientists and science journalists are improving their dysfunctional and defective methods. And the severe problems the article discusses are only part of the story. Many other severe problems in science research and science journalism go unmentioned in the Vox article, such as these:
(1) The tendency of scientific researchers to do research supporting the prevailing dogmas of their fields (dogmas that are often groundless or poorly supported), rather than to do objective research that takes a "follow the evidence wherever it leads" approach.
(2) The strong economic motivations that underlie misleading clickbait headlines, motivations such as the desire to produce page views that are profitable because of revenue-generating ads on such pages.
(3) The use of way-too-small study group sizes in fields such as neuroscience, resulting mostly in unreliable "false alarm" results.
(4) The use of poor methods of measurement in fields such as neuroscience, such as the widespread use of unreliable judgments of "freezing behavior" to judge fear or recall in rodents, rather than other more reliable methods.
(5) A frequent failure to follow any detailed blinding protocol.
(6) The extensive use of "keep torturing the data until it confesses" tactics, in which scientists fail to commit in advance to one straightforward method of gathering and analyzing data, and instead act as if they had a license to endlessly play around with it, running the data through convoluted and arbitrary analysis pathways that end up distorting and contorting what was actually gathered.
When such problems exist in abundance, neuroscientists are largely engaging in a sham and a scam when they take federal money and pretend to be engaging in rigorous experimental science.
Research science and science journalism are broken, and there is no sign that they are mending themselves.
A recent article on the Retraction Watch site is captured in the screenshot below. Notice the graph showing that fake or shoddy "paper mill" papers are growing faster than regular scientific papers.



