Tuesday, December 22, 2020

Neuroscience Research Customs Guarantee an Abundance of Junk Science

There is a very great deal of junk science published by neuroscientists, along with much research that is sound.  That so much junk science would appear is not surprising at all, given the research customs that prevail among neuroscientists. 

Let us look at a hypothetical example of the type of junk science that so often appears. Let us imagine a scientist named Jack who wishes to show that a particular protein in the brain (let's call it the XYZ protein) is essential for memory.  We can imagine Jack doing a series of experiments, each one taking one week of his time. 

Jack thinks up a simple design for this experiment. Some mice will be genetically engineered so that they lack the XYZ protein. Then the mice will be given a memory test. First, each mouse will be placed in a cage with a shock plate between it and some cheese. When the mouse walks over the shock plate to go directly to the cheese, it will be shocked. Later the mouse will be placed in the cage again, and it will be recorded whether the mouse takes an indirect path to the cheese (as if it remembered the previous shock it got on the shock plate), or whether the mouse just goes directly to the cheese (as if it did not remember that shock). The visual below shows the experiment:


[Figure: memory test]

Now, let us imagine that on Week 1 Jack does this experiment with 6 mice and finds no difference between the behavior of the mice that have the XYZ protein and those that do not. Jack may then write up these results as "Experiment #1," file them in a folder marked "Experiment #1," and keep testing until he gets the results he is looking for.

Jack may then get some results such as the following:

Week 2: Mice without XYZ protein behave like those with it. Results filed as Experiment #2. 
Week 3: Mice without XYZ protein behave like those with it. Results filed as Experiment #3. 
Week 4: Mice without XYZ protein behave like those with it. Results filed as Experiment #4. 
Week 5: Mice without XYZ protein behave like those with it. Results filed as Experiment #5. 
Week 6: Mice without XYZ protein behave like those with it. Results filed as Experiment #6. 
Week 7: Mice without XYZ protein behave like those with it. Results filed as Experiment #7. 
Week 8: Mice without XYZ protein behave like those with it. Results filed as Experiment #8. 
Week 9: Mice without XYZ protein behave like those with it. Results filed as Experiment #9. 
Week 10: Mice without XYZ protein behave like those with it. Results filed as Experiment #10. 
Week 11: Mice without XYZ protein behave like those with it. Results filed as Experiment #11. 
Week 12: Mice without XYZ protein behave like those with it. Results filed as Experiment #12. 
Week 13: Mice without XYZ protein behave like those with it. Results filed as Experiment #13. 
Week 14: Mice without XYZ protein behave like those with it. Results filed as Experiment #14. 
Week 15: Mice without XYZ protein behave like those with it. Results filed as Experiment #15. 
Week 16: Mice without XYZ protein behave like those with it. Results filed as Experiment #16. 

Then finally on Week 17, Jack may get the experimental result he was hoping for. On this week it may be that 5 out of 6 mice with the XYZ protein avoided the shock plate as if they were remembering well, but only 3 out of 6 mice without the XYZ protein avoided the shock plate as if they were remembering well.  Is this evidence that the XYZ protein is needed for memory, or that removing it hurts memory? The result on Week 17 is no such thing.  This is because Jack would expect to get such a result by chance, given his 17 weeks of experimentation. 

We can use a binomial probability calculator (like the one at the Stat Trek site) to compute the probability of getting 5 (or 6) out of 6 mice avoiding the shock plate by chance, under the assumption that each mouse always has 1 chance in 2 of avoiding the shock plate. The calculator tells us the chance of this is about 10 percent per experiment:

[Figure: binomial probability calculation]

Since Jack has done this experiment 17 times, and since the chance of getting 5 or more out of 6 mice avoiding the shock plate by chance is about 10 percent in each experiment, Jack should expect at least one of these experiments to give the result he got, even if the XYZ protein has nothing at all to do with memory.
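This reasoning is easy to verify with a few lines of Python. The sketch below computes the per-experiment false-alarm probability and the chance of at least one such false alarm across 17 independent experiments:

```python
from math import comb

# Probability that at least 5 of 6 mice avoid the shock plate by chance,
# assuming each mouse independently has a 1-in-2 chance of avoiding it.
p_single = sum(comb(6, k) for k in (5, 6)) / 2**6   # 7/64, about 0.11

# Probability that at least one of 17 independent experiments
# yields such a result purely by chance.
p_any = 1 - (1 - p_single) ** 17                    # about 0.86

print(f"Per-experiment false-alarm rate: {p_single:.3f}")
print(f"Chance of at least one false alarm in 17 weeks: {p_any:.2f}")
```

So even with no real effect at all, Jack had better than 5-in-6 odds of eventually seeing a week like Week 17.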

But there is nothing in the research customs of neuroscience to prevent Jack from doing something that will give readers a wrong impression. Instead of writing a paper covering all 17 weeks of his experimentation, Jack can produce a paper that writes up only Week 17 of his research. The paper can then have a title such as "Memory is weakened when the XYZ protein is removed." We can imagine research standards that would prevent so misleading a paper, but such standards are not in place. Discussing only Week 17 of his research, Jack can claim to have reached a "statistically significant" result providing evidence that the XYZ protein plays a role in memory.

Two other customs greatly aid the accumulation of junk science:
(1) It is not customary in scientific papers to report the exact dates when data was collected. This makes it much harder to track down cases in which an experimenter reports an experimental success during one data-gathering session and fails to report failures during other data-gathering sessions.
(2) It is not customary in scientific papers to report who made a particular measurement or produced a particular statistical analysis or graph. So we have no idea how many hard-to-do-right measurements (using very fancy equipment), or how many hard-to-get-right analyses and graphs, were done by scientists-in-training (typically unnamed as paper authors) or by novice scientists who may have committed errors.

Instead of such customs, it is a custom to vaguely use the passive voice throughout experimental papers. So instead of a paper saying something like "William Smith measured the XYZ protein levels in the five mice on January 3, 2020," our neuroscience papers are filled with statements such as "XYZ protein levels were measured in five mice" that fail to mention who did the measurement or when it was done.

What kind of research customs would help prevent us from being misled by experimental papers so often? We can imagine some customs:

(1) There might be a custom for every research scientist to keep an online log of his research activities. Such a log would report not only what the scientist found on each day, but also what the scientist was looking for on that day. So whenever a scientist reported some experimental effect observed only on week 27 of a particular year, we could look at his log and see whether he had unsuccessfully looked for such an effect in experiments during the five preceding weeks. Daily log reports would be made through online software that did not allow a report to be edited after the day it was submitted.
(2) There might be a custom that whenever a scientist reported some effect in a paper, he would be expected to fully report each and every relevant experiment he had previously done that failed to find such an effect. So, for example, it would be a customary obligation for a scientist to make reports such as this whenever there were previous failures: "I may note that while this paper reports a statistically significant effect observed in data collected between June 1 and June 7, 2020, the same experimenter tried similar experiments on five previous weeks and did not find statistically significant effects during those weeks."
(3) It would be the custom to always report in a scientific paper the exact date when data was collected, so that the claims in scientific papers could be cross-checked with the online activity logs of research scientists. 
(4) It would be the custom to always report in a scientific paper the exact person who made any measurement, and always report the exact person who made any statistical analysis or produced any graph, so that people could find cases when hard-to-do-right measurement and hard-to-get-right analysis was done by scientists-in-training and novice scientists. 
(5) It would be a custom for studies to pre-register a hypothesis, a research plan and a data analysis plan, before any data was collected, which would help prevent scientists from being free to "slice and dice" data 100 different ways, looking for some "statistically significant" effect in twenty different places, a type of method that has a high chance of producing false alarms. 
(6) It would be a custom for any scientific paper to quote the pre-registration statement that had been published online before any data was collected, so that people could compare such a statement with how the paper collected and analyzed data, and whether the effect reported matched the hypothesis that was supposed to be tested. 
(7) Whenever any type of complex or subtle measurement was done, it would be a custom for a paper to tell us exactly what equipment was used, and exactly where the measurement was made (such as the electron microscope in Room 237 of the Jenkins Building of the Carter Science Center). This would allow identification of measurements made with old, "bleeding edge," poorly performing, or unreliable equipment.
(8) Government funding for experimental neuroscience research would be solely or almost entirely given to pre-registered "guaranteed publication" studies, that would be guaranteed journal publication regardless of whether they produced null results, which would reduce the current "publication bias" effect by which null results are typically excluded from publication. 
(9) Government funding would be denied to experimental neuroscience research that failed to meet standards for minimum study group sizes, greatly reducing the number of way-too-small-sample-size studies. Journals would either deny publication to such studies or prominently flag them when they used such way-too-small study groups.
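To make item (1) above concrete, here is a minimal Python sketch (with hypothetical names) of a tamper-evident research log: each entry's hash covers the previous entry's hash, so silently editing an old report breaks the chain. A real system would also need trusted timestamps and an independent host, but the core idea is small:

```python
import datetime
import hashlib
import json

class ResearchLog:
    """A minimal append-only log: each entry's hash covers the previous
    entry's hash, so editing any past report breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, scientist, looking_for, found):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "date": datetime.date.today().isoformat(),
            "scientist": scientist,
            "looking_for": looking_for,
            "found": found,
            "prev_hash": prev_hash,
        }
        # Hash the record contents (the "hash" key is added afterward).
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Re-walk the chain; any retroactive edit is detected."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

With such a log, a scientist could still fail to report a week of null results, but he could not quietly rewrite what his log said he was looking for that week.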

No such customs exist. Instead we have poor neuroscience research customs that guarantee an abundant yearly supply of shoddy papers. 

Postscript: My discussion above is largely a discussion of what is called a file-drawer effect, in which wrong ideas arise because of a publication bias in which scientists write up only experiments that seem to show signs of an effect, leaving in their file drawers experiments that did not find such an effect. A paper discusses how this file drawer effect can lead to false beliefs among scientists:

"Many of these concerns stem from a troublesome publication bias in which papers that reject the null hypothesis are accepted for publication at a much higher rate than those that do not. Demonstrating this effect, Sterling analyzed 362 papers published in major psychology journals between 1955 and 1956, noting that 97.3% of papers that used NHST rejected the null hypothesis.  The high publication rates for papers that reject the null hypothesis contributes to a file drawer effect in which papers that fail to reject the null go unpublished because they are not written up, written up but not submitted, or submitted and rejected. Publication bias and the file drawer effect combine to propagate the dissemination and maintenance of false knowledge: through the file drawer effect, correct findings of no effect are unpublished and hidden from view; and through publication bias, a single incorrect chance finding (a 1:20 chance at α = .05, if the null hypothesis is true) can be published and become part of a discipline's wrong knowledge."

We can see how this may come into play with neuroscience research. For example, 19 out of 20 experiments may show no evidence of any increased brain activity during some act such as recall, thinking or recognition. Because of the file drawer effect and publication bias, we may learn only of the one study out of twenty that seemed to show such an effect, because of some weak correlation we would expect to get by chance in one out of twenty studies.
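The arithmetic behind that "one study out of twenty" is worth making explicit. The sketch below simulates many hypothetical labs, each running 20 experiments on a nonexistent effect, with the usual 1-in-20 false-positive rate at α = .05:

```python
import random

random.seed(42)
ALPHA = 0.05        # false-positive rate when the null hypothesis is true
N_EXPERIMENTS = 20
N_LABS = 10_000

# Count labs that get at least one chance "significant" result.
labs_with_false_positive = sum(
    any(random.random() < ALPHA for _ in range(N_EXPERIMENTS))
    for _ in range(N_LABS)
)

# Analytically: 1 - 0.95**20, about 0.64
print(f"Fraction of labs with a publishable false positive: "
      f"{labs_with_false_positive / N_LABS:.2f}")
```

Roughly two labs in three will stumble on a "significant" result that publication bias then preserves, while the nineteen null results stay in the file drawer.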

A web site describing the reproducibility crisis in science mentions a person who was told of a neuroscience lab "where the standard operating mode was to run a permutation analysis by iteratively excluding data points to find the most significant result," and quotes that person saying that there was little difference between such an approach and just making up data out of thin air.

Tuesday, December 15, 2020

They Kept Torturing the Data Until It Confessed to "Time Cells"

Behold the power of the modern neuroscientist. Like a magician making you believe in something that did not really happen, a neuroscientist can make you believe in something that's not really there. Part of the trick is to just use loaded language to describe particular cells. So if a neuroscientist wants you to believe that some cells store memories, he can just start calling any arbitrary cells he has selected "engram cells." And if a neuroscientist wants you to believe that some cells have something to do with time-related episodic memories, he can just arbitrarily pick some cells and start calling such cells "time cells." And if the neuroscientist wants to suggest that some cells store information about some place, he can just arbitrarily pick some cells and start calling such cells "place cells." There are no generally agreed-upon standards for identifying some cell as an engram cell or a "time cell" or a "place cell." A neuroscientist can make up any criteria he wishes for identifying some cell as an engram cell or a "time cell" or a "place cell."

Let us look at one of the studies claiming to supply evidence for so-called "time cells." The study is entitled "Time cells in the human hippocampus and entorhinal cortex support episodic memory." At the very beginning of the paper we have a definition of time cells designed to make sure they will be found: "Time cells are neurons in the hippocampus and entorhinal cortex that fire at specific moments within a cognitive task or experience." It is well known that neurons are constantly firing. The page here (entitled "Neuron Firing Rates in Humans") states, "we expect average firing rates across the brain to be around 0.29 per second," meaning an average neuron fires about once every three or four seconds, and at any given moment vast numbers of neurons are firing. So if you have defined time cells as cells that "fire at specific moments within a cognitive task or experience," of course you will be able to find such cells, since neurons are constantly firing. But in the next paragraph the text describes time cells as cells that "encode temporal information." That, of course, is an entirely different definition. The switch in definition does not inspire confidence.

The scientists describe below just a little bit of their convoluted and wildly unnatural method for trying to detect time cells:

"To identify time cells, we looked for an interaction between time and firing rate using a nonparametric ANOVA across time bins (Kruskal–Wallis test) after generating session-wide firing rate tuning curves with Gaussian convolution of the spike trains. Significance testing incorporated a permutation procedure, in which we repeated the ANOVA 1,000 times after circularly shuffling the original tuning curve."

This is not even a full description of the convoluted method the scientists have used to try to gin up some evidence for time cells. When you read the supplementary information of the paper, you will read about many other procedural twists and turns of their Byzantine method. For example, we read this:

"We first down sampled the spike train by a factor of 32 or 30, depending on the original sampling rate. We then compared the fits of two models that describe the likelihood of spiking activity at any given sample along the length of the encoding list (for encoding time cells) or retrieval list (for retrieval time cells)....A time field model, specified by a total of four parameters, included a Gaussian field of increased firing probability located somewhere along the length of the encoding list...The former was bound between 0 and 1, so that the mean of the field was located within the encoding list. To prevent excessively large Gaussian fields appearing as a flat line across the list, the standard deviation was bound at 1/6....We used matlab’s particleswarm with fmincon as a hybrid function to minimize the negative log-likelihood of these models to solve for their parameters.... We fit the model to data from all lists, only odd lists, and only even lists to avoid a single list driving the effect."

Reading about this labyrinthine methodology, I'm reminded of a saying common among experimenters: if you torture the data long enough, it will confess to anything.
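For readers curious what such a permutation procedure looks like, here is a simplified Python sketch. It is not the authors' code: it uses the variance of trial-averaged firing rates across time bins as a stand-in for their Kruskal-Wallis statistic, and it circularly shifts each trial independently rather than shuffling the session-wide tuning curve.

```python
import random

def time_tuning_stat(trials):
    """Variance across time bins of the trial-averaged firing rate
    (a simple stand-in for the paper's Kruskal-Wallis statistic).
    `trials` is a list of equal-length lists of per-bin firing rates."""
    n_bins = len(trials[0])
    means = [sum(tr[b] for tr in trials) / len(trials) for b in range(n_bins)]
    grand = sum(means) / n_bins
    return sum((m - grand) ** 2 for m in means) / n_bins

def permutation_pvalue(trials, n_perm=1000, seed=0):
    """p-value from circularly shifting each trial by a random offset,
    which preserves each trial's firing pattern but destroys its
    alignment with the time bins."""
    rng = random.Random(seed)
    observed = time_tuning_stat(trials)
    exceed = 0
    for _ in range(n_perm):
        shuffled = []
        for tr in trials:
            k = rng.randrange(len(tr))
            shuffled.append(tr[k:] + tr[:k])  # circular shift
        if time_tuning_stat(shuffled) >= observed:
            exceed += 1
    return exceed / n_perm
```

With trials that all spike in the same time bin, the p-value comes out near zero, while unstructured noise typically yields a large one. The point is that every knob in such a pipeline (the statistic, the binning, the shuffle scheme, the model bounds) is a choice the experimenter is free to tune until something crosses the significance threshold.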

[Figure: "Bad science" cartoon. Neuroscience experiments often go rather like this.]

The visual evidence the authors present for "time cells" consists of some "spike heat maps": not anything coming directly from a scientific instrument, but visuals resulting from convoluted, arbitrary fiddling with the data. These "spike heat maps" don't look impressive at all, and look like what we would get from random data.

There are two things we would like to see in a study such as this, in order to have any faith that it has actually discovered any evidence of cells that "encode temporal information":

(1) Evidence of pre-registration.  To have some faith that the scientists were not just playing around with data analysis until they found some faint effect they could call evidence of what they wanted to see, we would like to see the paper tell us that the study was a pre-registered study in which the scientists tested only a very specific hypothesis they had previously publicly committed themselves to testing (before collecting any data), using one and only one method of data gathering and data analysis they had previously publicly committed themselves to (before collecting any data).  We can assume the study was not pre-registered, since no claim is made of such a thing. 
(2) Evidence of blinding.  For a study like this to be credible, we would need to see a description of how an exact blinding protocol was followed, to reduce bias in data gathering and data analysis. No mention is made of any blinding protocol. 

The study provides no robust evidence at all that there are cells that "encode temporal information."

Monday, December 7, 2020

Common Experiences That Show the Untruth of Professorial Memory Claims

Scientists have an "ivory tower" dogma about memory formation that is contradicted by much of human experience. The dogma is that you cannot instantly form a long-term memory. The reason scientists believe this very silly notion has to do with their groundless theory that memories are stored in brains through the strengthening of synapses; such strengthening would require the synthesis of new proteins, which takes minutes.

So a neuroscientist will typically claim that you can't instantly acquire a permanent long-term memory, an idea that is sometimes called the theory of consolidation. They will claim that if you learn something just once, or see it just once, it will exist only in short-term memory and quickly fade away. They will claim that repeated exposures to a thing are required for it to be learned, and that over the course of these repeated exposures there is time for synapses to be strengthened through protein synthesis.

A statement of this theory of memory consolidation appeared recently in an MIT press release, making claims I debunk here. In the press release we read this:

“'The formation and preservation of memory is a very delicate and coordinated event that spreads over hours and days, and might be even months — we don’t know for sure,' Marco says. 'During this process, there are a few waves of gene expression and protein synthesis that make the connections between the neurons stronger and faster.' ”

Such a dogma is described like this in a scientific paper, which attributes to "common sense" something that is actually contrary to common sense and common experience:

"Common sense believes that long-term memory (LTM) is difficult to form for it requires repeated efforts for acquiring. The consolidation theory suggests that LTM needs hours to convert labile memory to LTMThis process requires the synthesis of new proteins that supports long-lasting changes in synaptic morphology." 

[Figure: if brains stored memories]

I could cite some animal studies that contradict such claims. But there would be no point in doing so, for we do not need any peer-reviewed scientific studies to prove that humans commonly form long-term new memories instantly. There is an abundance of very common human experience that can be cited to prove that humans routinely form new long-term memories instantly. 

Let us consider the simple case of a diary writer who has the habit of writing down in his diary before he goes to bed the events of the day.  Such a diary writer will have no trouble recalling all the important events that happened during the day, even events that he has not reviewed in his mind after they happened. Under the classification system used by psychologists, memories of things that happened twelve hours ago are long-term memory, and only memories of events within the past few minutes are short-term memory. 

Consider also the case of a store owner. He sees a customer for the first time, only briefly. Upon seeing the customer a second time a few days or weeks later, he may say, "Good to see you again." Such a store owner has formed a long-term memory after only a single brief encounter with a person.

It is a fact of human experience that humans can form long-term memories after listening to a teacher describe a historical incident only one time.  For example, after you heard your history teacher describe for the first time the assassination of Abraham Lincoln or Julius Caesar or John Kennedy, you probably remembered such stories long enough to pass a test in that class a few days or weeks later; and there is a good chance you remembered such accounts for years.  It was not necessary for the teacher to tell the stories two or three times for you to form a long-term memory of them. 

An extremely common example of the instant formation of long-term memories is how we remember movies and TV shows we have already seen. Imagine you see a particular movie on television for the first time.  If you were paying attention while watching, you will instantly form a long-term memory of the events in that movie.  Then a few months or years later you may see the movie showing again on TV, which frequently has repeat showings of movies and TV shows.  What will you typically do when you find the TV showing that old movie you saw a few months or years ago? If you particularly enjoyed the movie, you may want to watch it again. But more commonly, you will just change the channel, thinking to yourself, "I've already seen that one."

Why do you do that? It's because you remember the story of the movie, after having seen it only one time. So you change the channel, because you want to see some fresh never-before-seen story rather than some story you remember.  This would never happen if the dogmas of neuroscientists were true. If it required minutes for you to synthesize a protein to strengthen synapses in order to form a memory, you would never be able to remember things in a movie at the speed at which a movie or TV show is displayed. 

You can do a test to refute claims that it takes many minutes for you to form long-term memories. Five minutes after you finish watching a movie (which is longer than the maximum retention time attributed to short-term memory), try to recount aloud what happened in the movie. You will be able to recount the whole story if you paid attention while watching it. There will be no "catch-up" effect in which you start to remember the movie's story better a half hour or several hours later, after your synapses and proteins have caught up in their storage work. This is because you aren't actually storing memories through protein synthesis or synapse strengthening. There is no real evidence for a brain storage of memories. You remember things just as if your brain had no involvement, and you can instantly form a permanent new memory when something important happens. When someone slaps you hard on the face, you will instantly form a vivid permanent new memory. You do not require repeated slaps to remember being slapped.

[Figure: brain storage of memories]

Brain proteins have an average lifetime of less than two weeks. Your brain replaces its proteins at a rate of about 3% per day, and you wouldn't remember things for more than a month or two if your brain was storing memories. But people like me have very good memories of trivial things they saw 50 years ago.

The other day I was watching an episode of the old "Columbo" TV series from the early 1970's. I correctly identified the full names of an actor and actress who were guest stars, both people I hadn't seen on TV in many years. Then I saw the face of a very little-known actor I hadn't seen anywhere in almost 50 years. Very quickly I correctly identified that his first name was John, and that he had played a role in a short-lived TV show canceled in 1971. I quickly remembered the full names of two of the actresses in that short-lived TV show, never shown after its cancellation in 1971, and also the name of the show. One of these actresses was someone I hadn't seen on TV or in the movies for nearly 50 years. There was no memory consolidation involved in the preservation of such memories, involving persons I haven't seen or recalled in countless years. While watching another "Columbo" episode, I saw Joyce Van Patten, whom I remembered as the co-star of a series with Bob Denver, a TV series involving a diner. I hadn't seen that series ("The Good Guys"), or read a word about it, or thought about it since its cancellation in 1970.

No one would have such 50-year retention of obscure things if memories were stored in a brain that replaces its proteins at a rate of 3% per day, and no one would be able to recall such very obscure facts instantly if memories were stored in the brain (something without anything like indexing to allow fast retrieval).

The idea that repeated exposures are required for permanent memory formation is nonsense contrary to a large fraction of human experience. The only reason neuroscientists spout this nonsense is that they have committed themselves to some theory of neural memory storage. When I hear a PhD speaking the obvious nonsense that humans require repeated exposures to sensory information to have long-term memories of things, it's one of those times when I say to myself, "It takes a professor to be that blind." Isolated in some ivory tower ideological enclave in which adherence to group belief tenets is regarded as mandatory, a professor may start to believe things that make no sense and are contrary to abundant human experience. He then may end up making some obviously false claim that would never be made by a truck driver or a plumber or anyone else who had not been so conditioned by academia groupthink.