Friday, February 21, 2020

Fraud and Misconduct Are Not Very Rare in Biology

"It’s maddening when you see people cheat. And even if it involves grant money from the NIH, there’s very little punishment. Even with people who have been caught cheating, the punishment is super light. You are not eligible to apply for new grants for the next year or sometimes three years. It’s very rare that people lose jobs over it.” -- Scientist Elizabeth Bik (link).

Without getting into the topic of outright fraud, we know of many common problems that afflict a sizable percentage of scientific papers. One is that it has become quite common for scientists to give their papers titles announcing results or causal claims that are not actually justified by any data in the papers. A scientific study found that 48% of scientific papers use "spin" in their abstracts. Another problem is that scientists may formulate or change their hypothesis after the data have been gathered, while presenting it as if it had been stated in advance, a methodological sin that is called HARKing, which stands for Hypothesizing After Results are Known. An additional problem is that, given a body of data that can be analyzed in very many ways, scientists may simply experiment with different methods of data analysis until one produces the result they are looking for. Still another problem is that scientists may use various techniques to adjust the data they collect, such as stopping data collection once they find some statistical result they are looking for, or arbitrarily excluding data points that create problems for whatever claim they are trying to show. Then there is the fact that scientific papers are very often a mixture of observation and speculation, without the authors making clear which part is speculation. Then there is the fact that through the use of heavy jargon, scientists can make the most groundless and fanciful speculation sound as if it were something strongly rooted in fact, when it is no such thing. Then there is the fact that scientific research is often statistically underpowered, very often involving sample sizes too small to justify any confidence in the results. 
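The damage done by the last two problems -- analytic flexibility and small samples -- is easy to demonstrate with a small simulation. Below is a rough sketch in Python (my own illustration, using made-up numbers rather than data from any real study): even when there is no real effect at all, a researcher who tries several analysis choices on a small sample and keeps whichever one "works" will cross the conventional p < 0.05 threshold far more often than the nominal 5% of the time.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_experiment(n=10):
    # Two groups of n subjects drawn from the SAME distribution, so there is no real effect.
    a = rng.normal(0, 1, n)
    b = rng.normal(0, 1, n)
    # "Flexible" analysis: try several tests and data-handling choices,
    # and count the experiment as a success if any of them gives p < 0.05.
    p_values = [
        stats.ttest_ind(a, b).pvalue,          # plain t-test
        stats.mannwhitneyu(a, b).pvalue,       # non-parametric test instead
        stats.ttest_ind(a[1:], b).pvalue,      # drop an inconvenient "outlier"
        stats.ttest_ind(a[:5], b[:5]).pvalue,  # peek at the data after only 5 subjects
    ]
    return min(p_values) < 0.05

trials = 10000
false_positives = sum(one_experiment() for _ in range(trials))
print("Nominal false-positive rate: 5%")
print("Observed with flexible analysis: {:.1%}".format(false_positives / trials))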


All of these are lesser sins. But what about the far more egregious sin of outright researcher misconduct or fraud?  The scientists Bik, Casadevall and Fang attempted to find evidence of such misconduct by looking for problematic images in biology papers.  We can imagine various ways in which a scientific paper might have a problematic image or graph indicating researcher misconduct:

(1) A photo in a particular paper might be duplicated in a way that should not occur. For example, if a paper is showing two different cells or cell groups in two different photos, those two photos should not look absolutely identical, with exactly the same pixels. Similarly, brain scans of two different subjects should not look absolutely identical, nor should photos of two different research animals. (A simple automated check for this kind of exact duplication is sketched in the code after this list.)
(2) A photo in a particular paper that should be different from some other photo in that paper might be simply the first photo with one or more minor differences (comparable to submitting a photo of your sister, adjusted to have gray hair, and labeled as a photo of your mother). 
(3) A photo in a particular paper that should be original to that paper might be simply a duplicate of some photo that appeared in some previous paper by some other author, or a duplicate with minor changes.
(4) A photo in a particular paper might show evidence of being Photoshopped.  For example, there might be 10 areas of the photo that are exact copies of each other, with all the pixels being exactly the same. 
(5) A graph or diagram in a paper that should be original to that paper might be simply a duplicate of some graph or diagram that appeared in some previous paper by some other author. 
(6) A graph might have evidence of artificial manipulation, indicating it did not naturally arise from graphing software. For example, one of the bars on a bar graph might not be all the same color. 
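
Checks such as (1) are straightforward to automate. The sketch below is my own illustration (not a tool used by any of the researchers discussed in this post); it assumes a paper's figure panels have already been extracted to image files, and it uses the Pillow library to flag panels that are pixel-for-pixel identical even if they were saved in different file formats.

import hashlib
from collections import defaultdict
from pathlib import Path

from PIL import Image  # the Pillow library

def pixel_hash(path):
    # Hash the decoded pixel data rather than the file bytes, so that the same
    # image saved in two different formats still hashes identically.
    with Image.open(path) as im:
        return hashlib.sha256(im.convert("RGB").tobytes()).hexdigest()

def find_exact_duplicates(folder):
    groups = defaultdict(list)
    for path in Path(folder).glob("*.png"):  # hypothetical folder of extracted panels
        groups[pixel_hash(path)].append(path.name)
    return [names for names in groups.values() if len(names) > 1]

for group in find_exact_duplicates("extracted_panels"):
    print("Identical panels:", group)

A check like this catches only exact copies; duplicates that have been resized, cropped or lightly retouched (possibilities 2 through 4) require the kind of near-duplicate comparison sketched later in this post.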


There are quite a few other ways in which researcher misconduct could be identified by examining images, graphs or figures. Bik, Casadevall and Fang made an effort to find such problematic figures. In their paper "The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications," they report a large-scale problem. They conclude, "The results demonstrate that problematic images are disturbingly common in the biomedical literature and may be found in approximately 1 out of every 25 published articles containing photographic image data."  

But there is a reason for thinking that the real percentage of research papers with problematic images or graphs is far greater than this figure of only 4%.  The reason is that the techniques used by Bik, Casadevall and Fang seem like rather inefficient techniques capable of finding only a fraction of the papers with problematic images or graphs.  They describe their technique as follows (20,621 papers were checked): 

"Figure panels containing line art, such as bar graphs or line graphs, were not included in the study. Images within the same paper were visually inspected for inappropriate duplications, repositioning, or possible manipulation (e.g., duplications of bands within the same blot). All papers were initially screened by one of the authors (E.M.B.). If a possible problematic image or set of images was detected, figures were further examined for evidence of image duplication or manipulation by using the Adjust Color tool in Preview software on an Apple iMac computer. No additional special imaging software was used. Supplementary figures were not part of the initial search but were examined in papers in which problems were found in images in the primary manuscript."

This seems like a rather inefficient technique, one that would find less than half of the evidence for researcher misconduct that might be present in photos, diagrams and graphs. For one thing, the technique ignored graphs and diagrams. Probably one of the biggest opportunities for misconduct is researchers creating artificially manipulated graphs that did not naturally arise from graphing software, or researchers simply stealing graphs from other scientific papers. For another thing, the technique used would only find cases in which a single paper showed evidence of image shenanigans. It would do nothing to find cases in which one paper was inappropriately using an image or graph that came from some other paper by different authors. Also, the technique ignored supplemental figures (unless a problem was found in the main figures). Such supplemental figures are often a significant fraction of the total number of images and graphs in a scientific paper, and are often referenced in the text of a paper as supporting evidence. So they should receive the same scrutiny as the other images or figures in a paper. 

I can imagine a far more efficient technique for looking for misconduct related to imagery and graphs. Every photo, diagram, figure and graph in every paper in a very large set of papers on a topic (including supplemental figures) would be put into a database. A computer program with access to that database would then run through all the images, looking for duplicates or near-duplicates, as well as other evidence of researcher misconduct. Such a program might also make use of "reverse image search" capabilities available online. Such a computer program crunching the image data could be combined with manual checks. A technique like that would probably find twice as many problems. Because the technique described by Bik, Casadevall and Fang is a rather inefficient one, skipping half or more of its potential targets, we have reason to suspect that they have merely shown us the tip of the iceberg, and that the actual rate of problematic images and graphs (suggesting researcher misconduct) in biology papers is much greater than 4% -- perhaps 8% or 10%. 
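To give an idea of what such an automated pass might involve, here is a rough sketch of one piece of it: computing a perceptual hash for every figure panel in a large corpus and flagging pairs that are identical or nearly identical, even across different papers. This is only my illustration of the general idea, not the method used by Bik, Casadevall and Fang; it assumes the panels have been extracted to files, and it uses the third-party imagehash library.

from itertools import combinations
from pathlib import Path

import imagehash            # third-party perceptual hashing library
from PIL import Image

MAX_DISTANCE = 4            # small Hamming distance = near-duplicate candidate

def hash_corpus(folder):
    hashes = {}
    for path in Path(folder).rglob("*.png"):   # hypothetical corpus of extracted panels
        with Image.open(path) as im:
            hashes[str(path)] = imagehash.phash(im)
    return hashes

def near_duplicate_pairs(hashes):
    # Subtracting two image hashes gives the Hamming distance between them.
    for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
        if h1 - h2 <= MAX_DISTANCE:
            yield p1, p2, h1 - h2

corpus_hashes = hash_corpus("corpus_panels")
for a, b, dist in near_duplicate_pairs(corpus_hashes):
    print("Possible duplicate (distance {}): {} <-> {}".format(dist, a, b))

Comparing every pair of hashes is quadratic in the number of panels, so a corpus covering tens of thousands of papers would need the hashes indexed in a database, with reverse image search and manual review layered on top; the point is only that flagging candidate duplicates across an entire literature is well within the reach of ordinary software.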

A later paper ("Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience") by Bik, Casadevall and Fang (along with Davis and Kullas) involved analysis of a different set of papers. The paper concluded that "as many as 35,000 papers in the literature are candidates for retraction due to inappropriate image duplication." They found that 6% of the papers "contained inappropriately duplicated images." They reached this conclusion after examining a set of papers in the journal Molecular and Cellular Biology, using the same rather inefficient method as the previous study I just cited. They state, "Papers were scanned using the same procedure as used in our prior study." We can only wonder how many biology papers would be found to be "candidates for retraction" if a really efficient (partially computerized) method were used to search for image problems -- one using an image database and reverse image searching, checking not only photos but also graphs, and also checking the supplemental figures in the papers. Such a technique might easily find that 100,000 or more biology papers were candidates for retraction.

We should not be terribly surprised by such a situation. In modern academia there is relentless pressure for scientists to grind out papers at a high rate. There also seem to be relatively few quality checks on the papers submitted to scientific journals. Peer review serves largely as an ideological filter, to prevent the publication of papers that conflict with the cherished dogmas of the majority. There are no spot checks of papers submitted for publication, in which reviewers ask to see the source data or original lab notes or lab photographs produced in experiments. The problematic papers found by the studies mentioned above managed to pass peer review despite glaring duplication errors, indicating that peer reviewers are not making much of an attempt to exclude fraud. Given this misconduct problem and the items mentioned in my first paragraph, and given the frequently careless speech of so many biologists, in which they so often speak as if unproven claims or discredited claims are facts, it seems there is a significant credibility problem in academic biology. 

In an unsparing essay entitled "The Intellectual and Moral Decline in Academic Research," Edward Archer, PhD, states the following:

"My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields. Yet, more importantly, training in 'science' is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts....American universities often produce corrupt, incompetent, or scientifically meaningless research that endangers the public, confounds public policy, and diminishes our nation’s preparedness to meet future challenges....Universities and federal funding agencies lack accountability and often ignore fraud and misconduct. There are numerous examples in which universities refused to hold their faculty accountable until elected officials intervened, and even when found guilty, faculty researchers continued to receive tens of millions of taxpayers’ dollars. Those facts are an open secret: When anonymously surveyed, over 14 percent of researchers report that their colleagues commit fraud and 72 percent report other questionable practices....Retractions, misconduct, and harassment are only part of the decline. Incompetence is another....The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor. That failure means taxpayers are being misled by results that are non-reproducible or demonstrably false."

Justin T. Pickett, PhD, has written a long, illuminating post entitled "How Universities Cover Up Scientific Fraud." It states the following:

"I learned a hard lesson last year, after blowing the whistle on my coauthor, mentor and friend: not all universities can be trusted to investigate accusations of fraud, or even to follow their own misconduct policies. Then I found out how widespread the problem is: experts have been sounding the alarm for over thirty years. One in fifty scientists fakes research by fabricating or falsifying data....Claims that universities cover up fraud and even retaliate against whistleblowers are common....More than three decades ago, after spending years at the National Institutes of Health studying scientific fraud, Walter Stewart came to a similar conclusion. His research showed that fraud is widespread in science, that universities aren’t sympathetic to whistleblowers and that those who report fraudsters can expect only one thing: 'no matter what happens, apart from a miracle, nothing will happen.' ”

An Editor-in-Chief of the journal Molecular Brain has found evidence suggesting that a significant fraction of neuroscientists may not have the raw data backing up the claims in their scientific papers. He states the following:

"As an Editor-in-Chief of Molecular Brain, I have handled 180 manuscripts since early 2017 and  have made 41 editorial decisions categorized as 'Revise before review,' requesting that the authors provide raw data. Surprisingly, among those 41 manuscripts, 21 were withdrawn without providing raw data, indicating that requiring  raw data drove away more than half of the manuscripts. I rejected 19 out of the remaining 20 manuscripts because of insufficient raw data. Thus, more than 97% of the 41 manuscripts did not present the raw data supporting their results when requested by an editor, suggesting a possibility that the raw data did not exist from the beginning, at least in some portions of these cases....We really cannot know what percentage of those manuscripts have fabricated data....Approximately 53% of the 227 respondents from the life sciences field answered that they suspect more than two-thirds of the manuscripts that were withdrawn or did not provide sufficient raw data might have..fabricated the data."

Postscript: In the Daily Mail we read this:

"Carlisle refined his methods and has carried out two major investigations into research fraud.

By looking for 'too-good-to-be-true' patterns in the data, he found that one study in five published by one journal, Anaesthesia, was potentially fraudulent.

Authors from five countries submitted the majority of trials – 48 per cent of Chinese trials, 62 per cent of Indian trials and 90 per cent of Egyptian trials were suspected fakes. A third of South Korean and a fifth of Japanese trials were also possibly bogus.

A similar analysis of more than 5,000 studies published in major journals, including The New England Journal Of Medicine and the Journal Of The American Medical Association, confirmed his early finding, suggesting that 15 per cent could be fraudulent."
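
Carlisle's published method is considerably more elaborate than this, but the core "too-good-to-be-true" idea can be illustrated with a small sketch (my own, using fabricated numbers purely for demonstration). In an honestly randomized trial, the baseline characteristics of the two groups differ only by chance, so statistical comparisons of those baseline variables should produce p-values spread roughly uniformly between 0 and 1. A trial whose baseline p-values cluster tightly near 1 -- groups matched far too perfectly -- or near 0 stands out as anomalous.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def baseline_pvalues(n_variables=8, n_per_group=50, too_good=False):
    # Simulate the baseline table of one trial and return the p-values
    # from comparing each baseline variable between the two groups.
    pvals = []
    for _ in range(n_variables):
        a = rng.normal(0, 1, n_per_group)
        if too_good:
            b = a + rng.normal(0, 0.01, n_per_group)  # group B copied from A, lightly jittered
        else:
            b = rng.normal(0, 1, n_per_group)         # honest randomization
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return pvals

honest = baseline_pvalues()
suspect = baseline_pvalues(too_good=True)

# Kolmogorov-Smirnov test against the uniform distribution expected by chance;
# a tiny p-value here means the trial's baseline p-values are distributed implausibly.
print("honest trial :", stats.kstest(honest, "uniform").pvalue)
print("suspect trial:", stats.kstest(suspect, "uniform").pvalue)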

A 2023 study on cardiovascular imaging research found that "3.1% of corresponding authors declared having committed scientific fraud in the past 5 years." We can assume the actual percentage of researchers committing fraud is much higher, because the number of people who will confess to a sin is always much smaller than the number who committed it. 


Tuesday, February 4, 2020

When Animals Cast Doubt on Dogmas About Brains

These days our science news sources typically try to get us all excited about many a science study that is not worthy of our attention.  But when a study appears that tells us something important, such a study will receive almost no attention if the study reports something that conflicts with prevailing dogmas about reality.  So some recent not-much-there study involving zapping dead brain tissue got lots of attention in the press, but a far more important neuroscience study received almost no attention.  The more important study was one that showed a rat with almost no brain had normal cognitive and memory capabilities. 

The study was reported in Scientific Reports, a journal from the publisher of the very prestigious journal Nature, and had the title "Life without a brain: Neuroradiological and behavioral evidence of neuroplasticity necessary to sustain brain function in the face of severe hydrocephalus." The study examined a rat named R222 that had lost almost all of its brain because of a disease called hydrocephalus, which replaces brain tissue with a watery fluid. The study found that despite the rat having lost almost all of its brain, "Indices of spatial memory and learning across the reported Barnes maze parameters (A) show that R222 (as indicated by the red arrow in the figures) was within the normal range of behavior, compared to the age matched cohort." In other words, the rat with almost no brain seemed to learn and remember as well as a rat with a full brain. 

This result should not come as any surprise to anyone familiar with the research of the physician John Lorber. Lorber studied many human patients with hydrocephalus, in which healthy brain tissue is gradually replaced by a watery fluid. Lorber's research is described in this interesting scientific paper. A mathematics student with an IQ of 130 and a verbal IQ of 140 was found to have “virtually no brain.” His vision was apparently perfect except for a refraction error, even though he had no visual cortex (the part of the brain involved in sight perception).

In the paper we are told that of the roughly 16 patients Lorber classified as having extreme hydrocephalus (with 90% of the space inside the cranium replaced with spinal fluid), half had an IQ of 100 or more. The article mentions 16 patients, but the number with extreme hydrocephalus was actually about 60, as this article states, using information from this original source that mentions about 10 percent of a group of 600. So the actual number of these people with tiny brains and above-average intelligence was about 30. The paper states:

"[Lorber] described a woman with an extreme degree of hydrocephalus showing 'virtually no cerebral mantle' who had an IQ of 118, a girl aged 5 who had an IQ of 123 despite extreme hydrocephalus, a 7-year-old boy with gross hydrocephalus and an IQ of 128, another young adult with gross hydrocephalus and a verbal IQ of 144, and a nurse and an English teacher who both led normal lives despite gross hydrocephalus."

Sadly, the authors of the "Life without a brain" paper seemed to have learned too little from the important observational facts they recorded. Referring to part of the brain, they claim that "the hippocampus is needed for memory," even though their rat R222 had no hippocampus that could be detected in a brain scan. They stated, "It was not possible from these images of R222 to identify the caudate/putamen, amygdala, or hippocampus."  Not very convincingly, the authors claimed that rat R222 had a kind of flattened hippocampus, based on some chemical signs (which is rather like guessing that some flattened roadkill was a particular type of animal). 

But how could this rat with almost no brain have performed normally on the memory and cognitive tests?  The authors appeal to a miracle, saying, "This rare case can be viewed as one of nature’s miracles." If you believe that brains are what store memories and cause thinking, you must regard cases such as this (and Lorber's cases) as "miracles," but when a scientist needs to appeal to such a thing, it is a gigantic red flag. Much better if we have a theory of the mind under which such results are what we would expect rather than a miracle.  To get such a theory, we must abandon the unproven and very discredited idea that brains store memories and that brains create minds. 

The Neuroskeptic blogger at Discover magazine's website mentions this rat R222, and the case of humans who performed well despite having lost the vast majority of their brains to disease. Let's give him credit for mentioning the latter. But we shouldn't applaud his use of a trick that skeptics constantly employ: always ask for something you think you don't have. 

This "keep moving the goalposts" trick works rather like this. If someone shows a photo looking like a ghost on the edge of a photo, say that it doesn't matter because the ghost isn't in the middle of the photo. If someone then shows you a photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because the photo isn't a 6-megabyte high resolution photo.  If someone then shows you a 6-megabyte high resolution photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because it's just a photo and not a movie. If someone then shows you a movie of what looks like a ghost, say that it doesn't matter, because there were not multiple witnesses of the movie being made. If someone then shows you a movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the movie isn't a movie-theater-quality 35 millimeter Technicolor Panavision movie.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the ghost wasn't levitating. If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter because the ghost wasn't talking.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating talking ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the levitating talking ghost didn't explain the meaning of life to your satisfaction. 

The Neuroskeptic uses such a technique when he writes the following:

"In the case of the famous human cases of hydrocephalus, the only evidence we have are the brain scans showing massively abnormal brain anatomy. There has never, to my knowledge, been a detailed post-mortem study of a human case."

If there were such an alleged "shortfall," it would be irrelevant, because we can tell perfectly well from a brain scan the degree of brain tissue loss when someone has lost most of their brain, as happened in the case of Lorber's patients and other hydrocephalus patients. Complaining about the lack of an autopsy study in such patients is like saying that you don't know that your wife lacks a penis, because no one did an autopsy study on her. Neuroskeptic's claim of no autopsy studies on hydrocephalus patients is incorrect. When I do a Google search for "autopsy of hydrocephalus patient," I quickly find several studies which did such a thing, such as this one which reports that one of 10 patients with massive brain loss due to hydrocephalus was "cognitively unimpaired." Why did our Neuroskeptic blogger insinuate that such autopsy studies do not exist, when discovering their existence is as easy as checking the weather? 

There are many animal studies (such as those of Karl Lashley) that conflict with prevailing dogmas about the brain. One such dogma is that the cerebral cortex is necessary for mental function.  But some scientists once tried removing the cerebral cortex of newly born cats. The abstract of their paper reports no harmful results:

"The cats ate, drank and groomed themselves adequately. Adequate maternal and female sexual behaviour was observed. They utilized the visual and haptic senses with respect to external space. Two cats were trained to perform visual discrimination in a T-maze. The adequacy of the behaviour of these cats is compared to that of animals with similar lesions made at maturity."

Figure 4 of the full paper clearly shows that one of the cats without a cerebral cortex learned well in the T-maze test, improving from a score of 50 to almost 100. Karl Lashley did innumerable experiments in which he removed parts of animals' brains. He found that you could typically remove very large parts of an animal's brain without affecting the animal's performance on tests of learning and memory. 

Ignoring such observational realities, our neuroscientists cling to their dogmas, such as the dogma that memories are stored in brains, and that the brain is the source of human thinking. Another example of a dubious neuroscience dogma is the claim that the brain uses coding for communication. A scientific paper discusses how common this claim is:

"A pervasive paradigm in neuroscience is the concept of neural coding (deCharms and Zador 2000): the query 'neural coding' on Google Scholar retrieved about 15,000 papers in the last 10 years. Neural coding is a communication metaphor. An example is the Morse code (Fig. 1A), which was used to transmit texts over telegraph lines: each letter is mapped to a binary sequence (dots and dashes)."

But the idea that the brain uses something like a Morse code to communicate has no real basis in fact.  The paper quoted above almost confesses this by stating the following:

"Technically, it is found that the activity of many neurons varies with stimulus parameter, but also with sensory, behavioral, and cognitive context; neurons are also active in the absence of any particular stimulus. A tight correspondence between stimulus property and neural activity only exists within a highly constrained experimental situation. Thus, neural codes have much less representational power than generally claimed or implied."

That's moving in the right direction, but it would be more forthright and accurate to say that there is zero real evidence that neurons are using any type of code to represent human learned information, and that the whole idea of "neural coding" is just a big piece of wishful thinking where scientists are seeing what they hope to see, like someone looking at a cloud and saying, "That looks like my mother."