We cannot doubt that certain types of scientists are very hard-working people. Consider, for example, a type of scientist who specializes in some particular type of wild animal. Studying that animal may require a great deal of laborious field work outdoors, perhaps spending many days in a tent. Such scientists are probably pretty hard workers. But what about neuroscientists? Should we regard them in general as being extremely hard-working people? Maybe not.
The considerations below refer only to neuroscientists who are not practicing neurologists or neurosurgeons (a group that probably works very hard treating sick patients). There are quite a few reasons for suspecting that neuroscientists may not be terribly hard-working people. One reason is that the PhD theses of neuroscientists tend to be among the shortest in academia. On the page here, we see an interesting visual made by Marcus Beck using all of the PhD dissertations produced at the University of Minnesota. The visual shows that neuroscience PhD dissertations are among the shortest of any academic subject, averaging only a little more than 100 pages. For comparison, the same visual shows that history PhD dissertations average more than 300 pages, and that English and sociology dissertations average about 200 pages.
Another reason for suspecting that neuroscientists may not be working extremely hard is the continued failure of neuroscientists to follow sound experimental procedures. Again and again in neuroscience studies we find the same old Questionable Research Practices. Acting in a sloppy and lazy manner, neuroscientists typically fail to devise a detailed research plan, and to commit themselves to following it, before they gather data. They gather data and may then fool around with dozens of different ways of analyzing the data until they get something they can report as "statistically significant." Such scientists have been told endless times that failing to commit themselves to testing a specific hypothesis in a specific way results in unreliable research that is probably picking up mainly "noise" rather than something important; but they keep ignoring such warnings.
Following sound research practices requires very hard-working people who act with discipline. A "let's just gather data and then wing it to write the paper" approach is the kind of approach that may be preferred by rather lazy people. Gathering data from hundreds of subjects requires lots of work. But neuroscientists are infamous for using tiny study group sizes in their experiments. The typical neuroscience experiment involves gathering data from fewer than 15 human or animal subjects, often fewer than 10. That does not require very much work. Referring to neuroscience brain scan studies, the 2020 paper here tells us that "96% of highly cited experimental fMRI studies had a single group of participants and these studies had median sample size of 12."
It has often been pointed out to neuroscientists that they are typically using study group sizes way too small for reliable results. For example, the 2017 paper co-authored by the well-known statistician John P. A. Ioannidis ("Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature") stated that "because sample sizes have remained small" it was sadly true that "false report probability is likely to exceed 50% for the whole literature." But there is no evidence that neuroscientists have substantially changed their ways in response to such criticisms. It kind of reminds me of a teenager who keeps loafing on the couch one Saturday afternoon despite long being scolded by his parents to do something useful.
There are other reasons for thinking today's typical neuroscientist simply isn't working terribly hard. One reason is related to the relatively low research output of today's neuroscientists. There is a rough method that can be used to calculate how productive scientists in a particular field are:
(1) Get an estimate of the total number of scientists of a particular type in the United States.
(2) Get an estimate of how many papers are published by scientists of that type in the United States.
(3) After getting a preliminary "papers published per scientist" estimate, remember to divide by the average number of authors per paper for papers published by scientists of that type.
Let's try doing a rough calculation of this type.
(1) It is estimated that there are about 25,000 neuroscientists in the United States.
(2) In the United States in 2020, about 600,000 scientific papers were published, and no more than about 25,000 of these were neuroscience papers. That works out to only about one neuroscience paper published per neuroscientist in the United States.
(3) In the field of neuroscience, the average number of authors per paper is surprisingly high. According to the paper "A Century Plus of Exponential Authorship Inflation in Neuroscience and Psychology," there has been an astonishing rise in the average number of authors per neuroscience paper. The paper says, "The average authorship size per paper has grown exponentially in neuroscience and psychology, at a rate of 50% and 31% over the last decade, reaching a record 10.4 and 4.8 authors in 2021, respectively." That gives you a current average of about 10 authors per neuroscience paper.
So how much research work is the average neuroscientist doing per year? It seems reasonable to estimate that the amount of work done by one of about 10 paper authors would be equal to about one tenth the work needed to write the paper alone. So given only about one neuroscience paper published per neuroscientist in the United States, and an average of about 10 authors per paper, we are left with a very rough estimate that each year the average neuroscientist is doing only about one tenth of the research work needed to write a paper. Since the average neuroscience paper is only about 30 pages long, we are left with the suspicion that neuroscientists do shockingly little work writing up scientific papers. Perhaps their published work amounts to only about three to ten pages per year of writing. But who knows; my estimates here are very rough. I may note, however, that a large fraction of neuroscience papers these days consists of brain scan visuals or charts produced by software, neither of which requires much writing activity.
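The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The input figures are this article's rough estimates, not measured data:

```python
# Rough estimate of annual research output per neuroscientist,
# using the article's figures (assumptions, not measured data).

us_neuroscientists = 25_000     # estimated US neuroscientists
us_neuro_papers = 25_000        # estimated US neuroscience papers per year
avg_authors_per_paper = 10      # from the authorship-inflation paper (10.4 in 2021)
avg_paper_length_pages = 30     # assumed typical paper length

# Papers published per scientist per year (counting co-authored papers).
papers_per_scientist = us_neuro_papers / us_neuroscientists

# Crediting each of ~10 co-authors with an equal share of the work.
paper_share_per_scientist = papers_per_scientist / avg_authors_per_paper

# Rough pages of write-up per scientist per year.
pages_written_per_year = round(paper_share_per_scientist * avg_paper_length_pages, 1)

print(papers_per_scientist)       # 1.0 co-authored paper per scientist per year
print(paper_share_per_scientist)  # 0.1 "paper-equivalents" of work per year
print(pages_written_per_year)     # 3.0 pages of write-up per year
```

The final figure lands at the low end of the "three to ten pages per year" range mentioned above; different input estimates would move it, but not by an order of magnitude.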
I may note that many neuroscientists and other scientists engage in the appallingly dishonest practice of describing themselves as the authors of a certain number of papers, when they are mainly just co-authors of such papers, which typically had many authors. So if a neuroscientist was always a co-author, and a co-author of 100 scientific papers that had about 1000 different authors, he may dishonestly describe himself as "the author of 100 scientific papers." Then there is the issue that many neuroscience papers having some relation to drugs or medical devices are largely written by corporate-paid ghostwriters, paid to tell some story that promotes a corporate agenda. Based on the information in this article, which reports that at least 25% of the New York Times nonfiction bestseller list is written by ghostwriters, we may assume that many of the books of neuroscientists are largely written by ghostwriters. The page here tells us that "nearly all experts and celebrities use ghostwriters."
The 2017 paper "Effect size and statistical power in the rodent fear conditioning literature – A systematic review" inadvertently amounts to an astonishing portrait of neuroscientist laziness. One of the most basic things that a good experimental scientist should do is something called a sample size calculation, to determine the number of subjects needed in an experiment for the experiment to have a certain amount of statistical power such as 80% power. The paper reviewed 410 neuroscience experiments, and found that "only one article reported a sample size calculation." The average sample size reported in Figure 3 of the paper was only about 10 animals per experiment. The paper reports that "our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments."
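For readers unfamiliar with what a sample size calculation looks like, here is a minimal sketch using the standard two-sample normal approximation from Python's standard library. The effect size d = 1.0 is my assumption for illustration (a large standardized effect, roughly in line with the "typical effect sizes" the paper refers to); it is not a figure from the paper:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison with standardized effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(sample_size_per_group(1.0))  # 16 subjects per group, close to the
                                   # paper's estimate of 15
print(sample_size_per_group(0.5))  # 63 per group for a medium effect
```

The point of the sketch: even under a generously large assumed effect, 80% power calls for roughly 15-16 animals per group, which is more than the roughly 10 animals per experiment the review found to be typical.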
Similarly, the paper here on brain scan studies tells us that only 3% of papers that it examined had calculations of statistical power. That sounds like some very bad laziness going on.
In the journal Science we read a story entitled "Fake Science Papers Are Alarmingly Common" saying the following:
"When neuropsychologist Bernard Sabel put his new fake-paper detector to work, he was 'shocked' by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%....His findings underscores what was widely suspected: Journals are awash in a rising tide of scientific manuscripts from paper mills -- secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship."
Referring to "red-flagged fake publications" as RFPs, a paper by Sabel ("Fake Publications in Biomedical Science: Red-flagging Method Indicates Mass Production") and three other authors (including 2 PhDs) states this:
"The results show a rapid growth of RFPs [red-flagged fake publications] over time in neuroscience (13.4% to 33.7%) and a somewhat smaller and more recent increase in medicine (19.4% to 24%) (Fig. 2). A cause of the greater rise of neuroscience RFPs may be that fake experiments (biochemistry, in vitro and in vivo animal studies) in basic science are easier to generate because they do not require clinical trial ethics approval by regulatory authorities."
Later we read this:
"Study 4 tested our indicators in an even larger sample of randomly selected journals included in the Neuroscience Peer Review Consortium. It redflagged 366/3,500 (10.5%) potential fakes."
Doing accurate complex math can be hard work, but the mathematics in neuroscience papers is typically pretty simple compared to the very complex math of theoretical physics papers. Moreover, the math that appears in neuroscience papers is often wrong. A common type of calculation is a calculation of an effect size. Using the word "ubiquitous" to mean "everywhere," and using the word "inflated" to mean "overestimated," a scientific paper tells us, "In smaller samples, typical for brain-wide association studies (BWAS), irreproducible, inflated effect sizes were ubiquitous, no matter the method (univariate, multivariate)." A paper on fMRI neuroscience research tells us this:
"Almost every fMRI analysis involves thousands of simultaneous significance tests on discrete voxels in collected brain volumes. As a result, setting one’s P-value threshold to 0.05, as is typically done in the behavioral sciences, is sure to produce hundreds or thousands of false positives in every analysis."
There are fields that require constant studying to keep up with the latest advances. In the decades I worked as a computer programmer, I had to learn new technologies over and over again as the programming world moved (between 1990 and 2010) from command-line DOS boxes to object-oriented programming to Windows graphical user interfaces to web-based programming, and from simple flat-file text data storage and spreadsheet data storage to relational databases and cloud data storage. But what changes have there been in neuroscience in the past 30 years? These guys are still trying to get by on fMRI scans and EEG readings that they've been using for more than 30 years. And there's been no very substantive advance in the theories of neuroscientists in decades.
There are other reasons for suspecting that neuroscientists may tend to be people who don't work terribly hard. Again and again, neuroscientists act rather like people who had failed to diligently study brains in a comprehensive manner, and failed to do the work we would expect serious scholars of brains to have done. Again and again neuroscientists repeat "old wives' tales" of neuroscience communities that are inconsistent with facts about brains that scientists have learned. An example is their repeated recitation of a claim that brain signals travel at 100 meters per second, instead of telling us the facts that imply that brain signals probably travel at an average speed more than 1000 times slower than this while traveling through the cortex. One of the most important things ever discovered in neuroscience is that brain signals travel across synapses very unreliably, with a reliability of only about 50% or less. Neuroscientists fail to realize the enormous implications of this fact, and typically speak just as if they had never learned such a fact. When they speculate about how a brain could store memories, making vague hand-waving claims about synapses, neuroscientists sound like people who had not well studied the high instability and high molecular turnover of synapses and the dendritic spines they are attached to. Neuroscientists typically write like people who had failed to seriously study the most important cases of high mental performance despite heavy brain damage (discussed here and here and here). When they write about human memory performance, neuroscientists typically write like people who had not well studied the topic, and who didn't know anything about exceptional human memory performance (discussed here and here).
Neuroscientists are always dogmatically lecturing us about what they think are the causes of mental phenomena, but the great majority of neuroscientists show no signs of being very diligent scholars of human mental phenomena in all their diversity. Again and again in their literature I read statements about human minds and human mental performance that simply are not true, such as the very silly claim (contrary to all human experience) that it takes quite a few minutes for someone to create a new memory. The topic of human mental experiences and human mental abilities and mental states is a topic of oceanic depth. I repeatedly read writings from neuroscientists who sound like they have merely waded around at the edges of such an ocean, rather than very often making very long and very deep dives into it.
I wonder: is it a kind of "dirty little secret" of modern science that after getting your master's degree you can get a neuroscience PhD without doing terribly much work, and that you can then kind of coast your way or cruise your way through very much or most of your career, typically not exerting yourself all that much? Is the field of neuroscience these days somewhat like a haven for sloppy workers or semi-slackers who get used to just "phoning it in"?
But taking care of patients is very hard work, so none of the above applies to neuroscientists heavily involved in clinical care.