Friday, October 30, 2020

Inaccurate Titles and Misleading Citations Are Common in Science Papers

I have discussed at some length on this blog problems in the science literature such as poor study design, insufficient study group sizes, occasional fraud, misleading visuals, and unreliable techniques for fear measurement. Such things are only some of the many problems to be found in neuroscience papers. Two other very common problems are:

(1) Scientific papers often have inaccurate titles, making some claim that is not actually proven or substantiated by the research discussed in the paper.

(2) Scientific papers often make misleading citations to papers that did nothing to show the claim being made. 

Regarding the first of these problems, scientists often write inaccurate titles in an attempt to get more citations for their papers. For the modern scientist, the number of citations for the papers he or she wrote is a supremely important statistic, regarded as a kind of numerical "measure of worth" as important as the batting average or RBI statistic is for a baseball hitter. At a blog entitled "Survival Blog for Scientists," subtitled "How to Become a Leading Scientist," a blog that tells us its "contributors are scientists in various stages of their career," we find an explanation of why so many science papers have inaccurate titles:

"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."

[Image: "citation mania" cartoon]
Is this how today's scientists are trained?

A study of inaccuracy in the titles of scientific papers states, "23.4 % of the titles contain inaccuracies of some kind."

The concept of a misleading citation is best explained with an imaginary example. In a scientific paper we may see a line such as this:

Research has shown that the XYZ protein is essential for memory.[34]

Here the number 34 refers to a paper listed in the references at the end of the citing paper. Now, if the paper listed as reference #34 actually is a scientific paper showing the claim in question, that this XYZ protein is essential for memory, then we have a sound citation. But imagine that the paper shows no such thing. Then we have a misleading citation. We have been given the wrong impression that something was established by some other science paper.

A recent scientific paper entitled "Quotation errors in general science journals" tried to figure out how common such misleading citations are in science papers.  It found that such erroneous citations are not at all rare. Examining 250 randomly selected citations, the paper found an error rate of 25%.  We read the following:

"Throughout all the journals, 75% of the citations were Fully Substantiated. The remaining 25% of the citations contained errors. The least common type of error was Partial Substantiation, making up 14.5% of all errors. Citations that were completely Unsubstantiated made up a more substantial 33.9% of the total errors. However, most of the errors fell into the Impossible to Substantiate category."

Since 14.5% plus 33.9% accounts for only 48.4% of the errors, the "Impossible to Substantiate" category must make up the remaining 51.6%. And when we multiply the 25% overall error rate by the 33.9% figure, we find that according to the study, roughly 8.5% of citations in science papers are completely unsubstantiated. That is a stunning degree of error. We would perhaps expect such an error rate from careless high-school students, but not from careful scientists.
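To make the arithmetic explicit, here are the two calculations behind the figures above (these are simply the remainder and the product of the percentages the study reports; the rounding is mine, and these exact numbers are not stated in the study):

$$100\% - 14.5\% - 33.9\% = 51.6\%$$

$$0.25 \times 0.339 = 0.08475 \approx 8.5\%$$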

This 25% citation error rate found by the study is consistent with other studies on this topic. In the study we read this:

"In a sampling of 21 similar studies across many fields, total quotation error rates varied from 7.8% to 38.2% (with a mean of 22.4%) ...Furthermore, a meta-analysis of 28 quotation error studies in medical literature found an overall quotation error rate of 25.4% [1]. Therefore, the 25% overall quotation error rate of this study is consistent with the other studies."

In the paper we also read the following: "It has been argued through analysis of misprints that only about 20% of authors citing a paper have actually read the original." If this is true, we can better understand why so much misinformation is floating around in neuroscience papers. Again and again, paper authors spread legends of scientific achievement, legends abetted by incorrect citations, often made by authors who have not even read the papers they are citing.

A recent article at Vox.com suggests that scientists are just as likely to cite bad research that can't be replicated as they are to cite good research that can. We read the following:

"The researchers find that studies have about the same number of citations regardless of whether they replicated. If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers."

We also read the following troubling comment:

"Blatantly shoddy work is still being published in peer-reviewed journals despite errors that a layperson can see. In many cases, journals effectively aren’t held accountable for bad papers — many, like The Lancet, have retained their prestige even after a long string of embarrassing public incidents where they published research that turned out fraudulent or nonsensical...Even outright frauds often take a very long time to be repudiated, with some universities and journals dragging their feet and declining to investigate widespread misconduct."
