People often write idealized "knight in shining armor" portrayals of scientists. Yesterday I read an example, in which a writer stated this:
"We demand a lot from scientists. They are required to be objective, rigorous, and accurate, and to conduct their work free from the constraints of religion or politics. Few other areas of human endeavour are expected to be or valued as being so free from human error. At the same time, scientists are tasked with assessing and considering the potential consequences and applications of their work, and to act responsibly to maintain public trust in their whole system of knowledge."
Notice the underlying assumptions: that scientists actually are "objective, rigorous and accurate" and are pretty much "free from human error," and that they "act responsibly to maintain public trust in their whole system of knowledge." I have some news flashes for the author quoted above:
- In a scientific paper two authors stated this about the psychology and neuroscience literature: "False report probability is likely to exceed 50% for the whole literature." They also stated that "in light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience."
- A paper about brain-wide association studies says, "Irreproducible, inflated effect sizes were ubiquitous, no matter the method (univariate, multivariate)."
- Commenting in 2022 on the failure of neuroscientists to take replication seriously, a scientist noted, "There are no completed large-scale replication projects focused on neuroscience."
- Referring to extremely common neuroscience papers claiming activation of certain brain areas during some particular activity, a scientist stated, "Personally I’d say I don’t really believe about 95% of what gets published...I think claims of 'selective' activation [in brains] are almost without exception completely baseless."
- In a scientific paper, scientists stated, "Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference." Referring to "spin" in the abstracts of scientific papers (questionable interpretations), the paper said, "Among the 128 articles assessed, 107 (84 %) had at least one example of spin in their abstract," with 53% of the abstracts containing spin about causality.
- Referring to the problem of scientists incorrectly citing papers as if they supported some claim they did not actually support, a paper tells us that about 25% of the citations in science papers are in error.
- Referring to publication bias, a brain scientist notes that "reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results," and that consequently "scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers."
- A physicist states that "to impress editors and reviewers of high-impact journals," "you will have to hype up your title," and also "embellish your abstract," and perhaps also "deliberately confuse the reader about the content."
- In science fields such as neuroscience and cosmology and evolutionary biology, it is extremely common for scientists to assert unproven and implausible claims as if they were facts.
A "rinse and repeat" ritual for many professors
In the journal Science, in a story entitled "Fake Science Papers Are Alarmingly Common," we read the following:
"When neuropsychologist Bernard Sabel put his new fake-paper detector to work, he was 'shocked' by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%....His findings underscore what was widely suspected: Journals are awash in a rising tide of scientific manuscripts from paper mills -- secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship."
Referring to "red-flagged fake publications" as RFPs, a paper by Sabel ("Fake Publications in Biomedical Science: Red-flagging Method Indicates Mass Production") and three other authors (including 2 PhDs) states this:
"The results show a rapid growth of RFPs [red-flagged fake publications] over time in neuroscience (13.4% to 33.7%) and a somewhat smaller and more recent increase in medicine (19.4% to 24%) (Fig. 2). A cause of the greater rise of neuroscience RFPs may be that fake experiments (biochemistry, in vitro and in vivo animal studies) in basic science are easier to generate because they do not require clinical trial ethics approval by regulatory authorities."
Later we read this:
"Study 4 tested our indicators in an even larger sample of randomly selected journals included in the Neuroscience Peer Review Consortium. It redflagged 366/3,500 (10.5%) potential fakes."
Table 4 of the paper lists the percentage of RFPs (red-flagged fake publications) in neuroscience and medicine, by country:
- Publications from China have a "red-flagged fake" percentage of 55.8%.
- Publications from the USA have a "red-flagged fake" percentage of 7.3%.
- Publications from India have a "red-flagged fake" percentage of 6.8%.
- Publications from Europe have a "red-flagged fake" percentage of 6.6%.
The paper describes an economic ecosystem that helps explain such high numbers:
"The major source of fake publications are 1,000+ 'academic support' agencies – so-called 'paper mills' – located mainly in China, India, Russia, UK, and USA (Abalkina, 2021; Else, 2021; Pérez-Neri et al., 2022). Paper mills advertise writing and editing services via the internet and charge hefty fees to produce and publish fake articles in journals listed in the Science Citation Index (SCI) (Christopher, 2021; Else, 2022). Their services include manuscript production based on fabricated data, figures, tables, and text semi-automatically generated using artificial intelligence (AI). Manuscripts are subsequently edited by an army of scientifically trained professionals and ghostwriters. Although their quality is relatively low (Cabanac and Labbé, 2021), fake publications nevertheless often pass peer review in established journals with low to medium impact factors (IF 1-6) (Seifert, 2021)."
Another study found this:
"Of the 1,792 manuscripts for which the authors stated they were willing to share their data, more than 90% of corresponding authors either declined or did not respond to requests for raw data (see ‘Data-sharing behaviour’). Only 14%, or 254, of the contacted authors responded to e-mail requests for data, and a mere 6.7%, or 120 authors, actually handed over the data in a usable format. The study was published in the Journal of Clinical Epidemiology on 29 May."
A reasonable suspicion (in light of the Sabel paper described above) is that very many of these authors did not want to share data because some or most of their data had been faked.