Tuesday, March 29, 2022

Why the Academia Cyberspace Profit Complex Keeps Producing Misleading Brain Research Reports

Why do so many untrue and misleading stories about brains and minds appear in the press? The answer is largely financial: various parties profit from such misleading stories. Borrowing the famous "follow the money" advice from All the President's Men (the classic movie about the Watergate affair), let us "follow the money" and see how various parties profit from misleading press stories about brains and minds. 

The diagram below illustrates a profit complex that links academia and cyberspace (another word for the Internet). 

Bad Science Is Profitable

To understand this profit complex, you must first understand how modern scientists are judged by their peers and superiors in academia. Two numbers dominate such judgments: (1) the number of scientific articles the scientist has written or co-written, called the paper count; (2) the number of times other papers have mentioned or cited the papers the scientist has written or co-written, called the citation count. A scientist hoping for a promotion such as tenure or a higher salary very much wants both of these numbers to be as high as possible. 

The desire to raise these numbers is very much a factor when a scientist designs an experiment. Suppose a scientist must choose between a "quick and dirty" experimental design likely to produce a fast, positive, or important-seeming result, and a more stringent design that is longer, harder, and less likely to produce such a result. A scientist intent on increasing his paper count and citation count will be more likely to choose the "quick and dirty" design. Such designs very often involve way-too-small sample sizes, in which fewer than 15 subjects are studied, often in studies where many dozens, hundreds, or thousands of subjects would be needed to get a reliable result. A scientific study found that research papers that failed to replicate accumulated on average 153 more citations than papers describing research that replicated, stating this: "papers that replicate are cited 153 times less, on average, than papers that do not." Such failing-to-replicate studies typically involve shoddy "quick and dirty" experimental designs. 
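To make concrete why such tiny samples are so unreliable, here is a minimal Monte Carlo sketch of my own (an illustration using assumed numbers, not taken from any study discussed here), written in Python with NumPy and SciPy. It assumes a modest real effect and estimates how often a simple two-group experiment detects that effect at various sample sizes:

```python
# Illustrative sketch: how often does a two-group experiment with a
# modest real effect (Cohen's d = 0.3, an assumed value) reach p < 0.05?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.3   # assumed modest effect, in standard deviations
TRIALS = 5_000      # simulated experiments per sample size

def estimated_power(n_per_group):
    """Fraction of simulated experiments that reach p < 0.05."""
    hits = 0
    for _ in range(TRIALS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        _, p = ttest_ind(control, treated)
        if p < 0.05:
            hits += 1
    return hits / TRIALS

for n in (10, 50, 200):
    print(f"n = {n:3d} per group -> detection rate ~ {estimated_power(n):.0%}")
```

With around 10 subjects per group, the simulation detects the real effect only about one time in ten; at 200 subjects per group, about 85% of the time. A "quick and dirty" study with fewer than 15 subjects will usually miss a real modest effect, and the rare "significant" results such studies do produce tend to be lucky overestimates.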

Nowadays science journals have a tendency called "publication bias": a tendency to publish papers reporting positive results and reject papers reporting null or negative results. When a scientist does an experiment that produces a null or negative result, and cannot get a journal to publish the paper, the scientist's paper count is not increased, and the effort does nothing to advance the scientist's career. So scientists will avoid careful, stringent designs less likely to yield a positive result, and will favor "quick and dirty" designs more likely to yield a positive result, and to yield it quickly. The quicker the experiment can be done, the more quickly the scientist's paper count can be increased. 
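Publication bias does not merely hide negative results; it also inflates the effects that do get published. The following sketch (again my own hypothetical illustration, with assumed numbers) imagines many labs running the same underpowered experiment while journals accept only the statistically significant results:

```python
# Illustrative sketch: many labs run the same underpowered study of a
# modest real effect; journals publish only the "significant" results.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.3   # assumed true effect size
N = 10              # tiny "quick and dirty" sample per group
published = []      # effect estimates that survive publication bias

for _ in range(10_000):
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(TRUE_EFFECT, 1.0, N)
    _, p = ttest_ind(control, treated)
    if p < 0.05:    # only "positive" (significant) results get published
        published.append(treated.mean() - control.mean())

print(f"true effect size:           {TRUE_EFFECT:.2f}")
print(f"average published estimate: {np.mean(published):.2f}")
```

In a typical run, the average published estimate comes out around triple the true effect, because only the lucky overestimates cross the significance threshold.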

After the experiment is run, some observations are produced. Desiring to report a positive and ideally important-sounding result, scientists will tend to filter the observations to produce some subset more favorable to such a result. Sometimes this process can be described as cherry-picking; other times it is something along the lines of "keep slicing and dicing the data until it gives what is wanted" or "keep torturing the data until it confesses." There are 101 reasons that can be given for excluding some data points and keeping others, and there are hundreds of statistical methods that can be used to massage and filter data until more favorable results remain. In analyzing the data, scientists will have a motivation to avoid blind analysis techniques, which minimize the chance of biased analysis in which scientists report seeing what they want to see. 
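A small simulation (a sketch of my own with assumed numbers; for simplicity each analysis is modeled as an independent draw, though real overlapping subgroups would be correlated) shows how dangerous this freedom is: give an analyst twenty different ways to slice data containing no real effect at all, and some slice will usually look "significant":

```python
# Illustrative sketch: 20 different analyses ("slices") of data that
# contains no real effect at all. How often does at least one analysis
# cross the p < 0.05 threshold anyway?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
TRIALS = 2_000   # simulated datasets, all pure noise
ANALYSES = 20    # subgroups / outcomes / exclusion rules tried per dataset
N = 20           # subjects per group in each analysis
lucky = 0

for _ in range(TRIALS):
    p_values = [ttest_ind(rng.normal(size=N), rng.normal(size=N))[1]
                for _ in range(ANALYSES)]
    if min(p_values) < 0.05:   # the analyst reports the best-looking slice
        lucky += 1

print(f"no-effect datasets yielding a 'positive' finding: {lucky / TRIALS:.0%}")
```

In a typical run, roughly 64% of these no-effect datasets yield at least one publishable-looking "result," which is just the arithmetic of 1 − 0.95^20.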

After such analysis is completed, there comes the writing of a scientific paper. When writing up a paper, scientists are very motivated to describe the research as showing some positive result, even if the research mainly or entirely produced a negative or null result; given publication bias, a paper reporting only a negative result is unlikely to be published, and an unpublished paper does nothing for the paper count. Scientists are also very motivated to report getting some important result. The more important the result a paper claims, the more likely the paper is to be published, and the more likely it is to be cited by other papers. Such citations are extremely important to scientists, since scientists are judged not just by their paper count but also by their citation count. 

It very often happens that in writing up their research, scientists make claims that are misleading, exaggerated, or just plain false. At a blog entitled "Survival Blog for Scientists" and subtitled "How to Become a Leading Scientist," a blog that tells us its "contributors are scientists in various stages of their career," we have an explanation of why so many science papers have inaccurate titles:

"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."

[Image: scientist citation counts]
Is this how scientists are trained?

A neuroscientist makes this confession:

"This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers."

After a scientific paper has been written and published, it is announced with a press release issued by the main academic institution involved in the research. Nowadays the press releases of universities and colleges are notorious for making sensationalized claims that are not warranted by anything discovered in the research being described. Often a tentative claim made in a scientific paper (basically a "perhaps" or a "maybe") will be stated as if it were simply the discovery of a definite fact. Other times a university press release will make some important-sounding claim that was never made in the scientific paper itself. For example, when a scientific paper appeared claiming merely that "Regional synapse gain and loss accompany memory formation in larval zebrafish," a great number of press stories repeated the headline of a press release claiming that the formation of a memory had been observed (a claim not made in the paper). We have every reason to believe that synapse gains and losses occur continually in the brain, regardless of whether some new memory is forming. 

Authorship anonymity is a large factor facilitating the appearance of misleading university and college press releases. Nowadays such press releases typically appear without any person listed as the author. So when a lie occurs (as it very often does), you can never point the finger at one particular person who was lying. When PR writers at universities think to themselves "no one will blame me specifically if the press release has an error," they will feel more free to say misleading and untrue things that make unimpressive research sound important. We should hold every scientist involved in a scientific paper responsible and accountable for every untruth that appears in the paper, and also every untruth that appears in the university press release announcing it, unless that scientist has publicly protested the misstatement. 
 
Misleading press releases produce an indirect financial benefit for the colleges and universities that issue them. Untrue announcements of important research results make a college or university sound like a place where important research is being done. The more such press releases appear, the more people will think the institution is worth the very high tuition fees it charges. 

Judging from the quote below, it seems that science journalists often look down on the writers of university and college press releases, even though such journalists very often uncritically parrot the claims those writers make. In an Undark.org article we read this:

"Still, there are young science journalists who say they would rather be poor than write a press release. Kristin Hugo, for example, a 26-year-old graduate of Boston University’s science journalism program, refuses to step into a communications role with an institution, nonprofit or government agency.  'I’ve been lucky enough that I haven’t had to compromise my integrity. I really believe in being non-biased and non-partisan,' she says. 'I really, really, really want to continue that. I wouldn’t necessarily begrudge someone for going into [public relations] because there’s money in that, but I’d really like to stay out of it.' "

Misleading press releases also help to sustain cyberspace profit systems outside of a college or university. Such press releases are repeated (often with further exaggerations and misstatements) by a host of web sites offering clickbait headlines leading to pages containing ads. The more people click on these clickbait headlines, the more those ad-bearing pages are viewed, and the more advertising revenue the web sites collect. 

So web sites giving science news stories have a very large financial incentive to produce exaggerated or untrue headlines that users will be more likely to click on. If a headline truthfully says, "Another Junk Science Brain-Scanning Result," almost no one will click through to the ad-laden page with the story. But if the headline untruthfully says, "Breakthrough Study Reveals the Secret of Memory," then thousands of people may click on it, producing many page views and much more advertising revenue. 

The web sites are one profit center benefiting from poor and misleading science journalism that exaggerates or misrepresents unimpressive research. Another profit center is the science journalists themselves. Most science journalists do not work on a salary basis, paid the same regardless of what they write. Instead most work on a per-article basis, earning about $1 per word for an article in a print magazine such as Discover Magazine, or about $300 per article for an online article. Such journalists tend to pitch their stories to editors; the more sensational-sounding the story and the more exciting its claims, the more likely it is to be published. An article that applies critical scrutiny to some impressive-sounding press release claim is unlikely to be published. By uncritically parroting unfounded but exciting-sounding claims in university and college press releases, science journalists help to fatten their own wallets. Often science journalists will imaginatively add their own unwarranted claims and unjustified spin about some research, hoping to further increase their chances of being paid for exciting-sounding news stories. In general, science journalists paid by the word or by the article are often very unreliable sources of information.  

To "follow the money" all the way, we must go back to the scientists who originally chose "quick and dirty" designs, and who may have misstated the implications and findings coming from their research. What is the result when "quick and dirty" experiment designs are chosen? The result is that the paper count (the number of published papers) of a scientist will increase more quickly. What is the result when scientists misstate or exaggerate what their observations show or imply, making their research sound important when it is not? The result is a greater number of citations of their papers by other scientists. The very important "citation count" of a scientist will increase.  What is the financial result when a scientist has piled up a high paper count and a high citation count? That scientist will be more likely to get promoted, more likely to get the tenure that gives him a lifetime job, more likely to get a higher salary, more likely to get a lucrative book deal with a major publisher, and so forth. 

What we have is an infrastructure that at every level incentivizes bad actors who mislead and misinform, as long as they do so in some way that produces exciting-sounding results fitting prevailing belief systems in academia. Given such an infrastructure, you should not be surprised to hear that today's cognitive neuroscience is a house of cards resting largely on an illusory foundation. Most of the things cognitive neuroscientists claim to have established have not actually been established at all. Most of the more important-sounding claims made in the neuroscience news stories of recent years lack any solid foundation in observations. Junk science flourishes because so many people in so many different places profit from junk science. 
