Friday, October 30, 2020

Inaccurate Titles and Misleading Citations Are Common in Science Papers

I have discussed at some length on this blog problems in the science literature such as poor study design, insufficient study group sizes, occasional fraud, misleading visuals and unreliable techniques for fear measurement. Such things are only some of the many problems to be found in neuroscience papers. Two other very common problems are:

(1) Scientific papers often have inaccurate titles, making some claim that is not actually proven or substantiated by the research discussed in the paper.

(2) Scientific papers often make misleading citations to papers that did nothing to show the claim being made. 

Regarding the first of these problems, scientists often write inaccurate titles to try to get more citations for their papers. For the modern scientist, the number of citations for papers he or she wrote is a supremely important statistic, regarded as a kind of numerical "measure of worth" as important as the batting average or RBI statistic is for a baseball hitter. At a blog entitled "Survival Blog for Scientists" and subtitled "How to Become a Leading Scientist," a blog that tells us  "contributors are scientists in various stages of their career," we have an explanation of why so many science papers have inaccurate titles:

"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."

[Image: "citation mania" cartoon]
Is this how today's scientists are trained?

A study of inaccuracy in the titles of scientific papers states, "23.4 % of the titles contain inaccuracies of some kind."

The concept of a misleading citation is best explained with an imaginary example.  In a scientific paper we may see some line such as this:

Research has shown that the XYZ protein is essential for memory.34

Here the number 34 refers to some scientific paper listed at the end of the scientific paper. Now, if the paper listed as paper #34 actually is a scientific paper showing the claim in question, that this XYZ protein is essential for memory, then we have a sound citation. But imagine if the paper does not show any such thing. Then we have a misleading citation.  We have been given the wrong impression that something was established by some other science paper. 

A recent scientific paper entitled "Quotation errors in general science journals" tried to figure out how common such misleading citations are in science papers.  It found that such erroneous citations are not at all rare. Examining 250 randomly selected citations, the paper found an error rate of 25%.  We read the following:

"Throughout all the journals, 75% of the citations were Fully Substantiated. The remaining 25% of the citations contained errors. The least common type of error was Partial Substantiation, making up 14.5% of all errors. Citations that were completely Unsubstantiated made up a more substantial 33.9% of the total errors. However, most of the errors fell into the Impossible to Substantiate category."

When we multiply the 25% figure by 33.9%, we find that according to the study, about 8.5% of citations in science papers are completely unsubstantiated. That is a stunning degree of error. We would perhaps expect such an error rate from careless high-school students, but not from careful scientists. 
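The arithmetic here is simple enough to check directly (a quick sketch of my own, using only the figures quoted from the study):

```python
# Combining the study's two figures: 25% of citations contained some
# error, and 33.9% of those errors were "completely Unsubstantiated."
total_error_rate = 0.25
unsubstantiated_share = 0.339

fully_unsubstantiated = total_error_rate * unsubstantiated_share
print(f"{fully_unsubstantiated:.1%} of all citations")  # roughly 8.5%
```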

This 25% citation error rate found by the study is consistent with other studies on this topic. In the study we read this:

"In a sampling of 21 similar studies across many fields, total quotation error rates varied from 7.8% to 38.2% (with a mean of 22.4%) ...Furthermore, a meta-analysis of 28 quotation error studies in medical literature found an overall quotation error rate of 25.4% [1]. Therefore, the 25% overall quotation error rate of this study is consistent with the other studies."

In the paper we also read the following: "It has been argued through analysis of misprints that only about 20% of authors citing a paper have actually read the original." If this is true, we can get a better understanding of why so much misinformation is floating around in neuroscience papers. We repeatedly have paper authors spreading legends of scientific achievement, legends abetted by incorrect citations often made by authors who have not even read the papers they are citing.

A recent article suggests that scientists are just as likely to make citations to bad research that can't be replicated as they are to make citations to good research. We read the following:

"The researchers find that studies have about the same number of citations regardless of whether they replicated. If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers."

We also read the following troubling comment:

"Blatantly shoddy work is still being published in peer-reviewed journals despite errors that a layperson can see. In many cases, journals effectively aren’t held accountable for bad papers — many, like The Lancet, have retained their prestige even after a long string of embarrassing public incidents where they published research that turned out fraudulent or nonsensical...Even outright frauds often take a very long time to be repudiated, with some universities and journals dragging their feet and declining to investigate widespread misconduct."

Thursday, October 22, 2020

When Mainstream "Science Information" Sites Promote Mind Poisons

 Many people have the idea that if you keep reading mainstream sites that are commonly called "science information" sites, you will become a better citizen. Some people think that if you read such sites, you will frequently be reminded of how bad a problem global warming is, and that you will therefore be moved to reduce your carbon footprint. Other people think that if you read such "science information" sites, you will be a good global citizen, get all of your required vaccinations, and eat genetically modified food like our corporations wish you to do.  

I'm not sure there is any very good evidence that science knowledge causes people to be better global citizens.  These days a person's carbon footprint tends to be proportional to his or her wealth, a factor that is independent of a person's science knowledge. Furthermore, it is possible that after reading the articles on "science information" web sites, you might have a greater tendency to become morally indifferent.  That's because sometimes our mainstream "science information" websites publish articles that might tend to destroy any moral tendencies you had, if you took seriously what you were reading. 

I may use the term "mind poisons" for theories that tend to produce moral indifference in anyone who believes in them. One such theory (occasionally promoted on mainstream "science information" sites) is the theory that there are an infinite number of parallel universes containing an infinite number of copies of you, each a little different. This insane notion is the idea that every instant the universe is kind of splitting into an infinite number of copies of itself, so that every possibility is actualized. There is no evidence or any good reason for believing in such nonsense, but it is occasionally sold on mainstream "science information" sites as if it were a respectable physics theory. 

It is easy to explain why such a theory promotes moral indifference. If every possibility is happening, and there are an infinite number of copies of you and everyone else, each a little bit different, then there would be no point in ever acting morally. For example, if you were walking along the street, and saw someone bleeding heavily, rather than phoning for help, you would think there was no point in acting, on the grounds that regardless of what you do, there will be an infinite number of parallel universes in which the person survives, and an infinite number of parallel universes in which the person bleeds to death. 

Another example of a morally destructive mind poison is the theory of determinism, the theory that humans do not have free will. Such a theory is based on the erroneous idea that decisions arise from brain states. The idea is that you have no free will because your decisions are produced by brain states that follow inevitably from atomic arrangements. The posts on this site do a good job of exploding the rationale for this philosophical theory. There is actually no understanding of how mind or memory can be brain effects, and there are very strong neuroscience reasons for believing that neither mind nor memory can be brain effects. No one has any real understanding of how neurons could ever cause an idea, a memory storage, a memory recollection or a decision. So your decisions cannot be explained away as mere brain effects, and you very much do have free will. 

It is rather obvious why determinism is a morally destructive idea. If you believe that you have no free will and must act exactly as you act, then you will tend to have no guilt about anything you do. Being contrary to all human experience, contrary to what we know about the brain (something very different from commonly peddled myths about the brain), and a very morally destructive doctrine, determinism can be accurately described as evil nonsense. 

 But the other day I saw the evil nonsense of determinism being promoted on a widely read web site that is commonly regarded as a "science information" web site. I will not link to the article, because my new policy is never to cause readership for those who teach such morally ruinous absurdities. I may merely note that the blog post promoting this determinism bunk was written by someone who has never shown any signs of being a serious scholar of either mental phenomena or neuroscience.

So these are two cases in which mainstream "science information" sites have promoted morally ruinous mind poisons.  There is a third such case. On some  of the leading sites regarded as "science information" sites, I recently read an article promoting the simulation hypothesis, the hypothesis that you are merely part of some computer simulation set up by extraterrestrials. 

That sites calling themselves "science sites" would be promoting such nonsense is merely additional proof that much of what you read on such sites is neither science nor rational speculation.  We have zero reasons for believing that a computer could ever produce consciousness, and have never observed any computer produce the slightest trace of consciousness.  So believing that you are just part of some computer simulation is as silly as believing that your mother is merely a TV series character that climbed out of your wide-screen TV set. 

The simulation hypothesis is as morally destructive as the other two ideas I previously mentioned, although most people fail to see why that is so.  The reason is that once you believe that you are merely part of a computer simulation created by extraterrestrials, you will tend to doubt that the people you observe with your eyes really exist. 

If some extraterrestrials had caused your consciousness to arise by creating some computer simulation, there is not the slightest reason to think that they would follow some rule that every person observed in the simulation has their own consciousness. It would be almost infinitely easier to set up a simulation in which most of the bodies seen in the simulation were merely software routines that had no consciousness at all. That would be rather like a video game. In a video game there is a single conscious agent (yourself) interacting with various computer-generated characters that are merely software routines without any consciousness. 

So once a person believes that he is part of a computer simulation created by extraterrestrials, he may  tend to believe that the people he sees in the world are not conscious minds like himself, but merely "characters in the simulation," like video game characters.   That simulation believer will then feel absolutely free to commit any wicked act he pleases, thinking he is not causing any real pain by doing such things.  Similarly, while playing a video game you feel free to cause as much on-the-screen bloodshed as you wish, and don't worry that pain is being caused by such actions that occur in your video game. 

So it should be clear that the simulation hypothesis is a morally destructive doctrine, which may lead someone to kill, injure and rape without having any remorse.  We can therefore accurately say that the simulation hypothesis is a type of mind poison. But exactly this mind poison was being promoted recently on several leading mainstream sites that call themselves "science information" sites. 

Clearly, we must use our critical faculties when reading what is on so-called "science information" sites, because while such sites mainly teach truth, they often promote claims that are untrue or vastly improbable, and occasionally promote mind poisons that are evil nonsense. Sadly, some of the world's worst nonsense is sometimes to be found on mainstream "science information" sites. 

Wednesday, October 14, 2020

The Dubious Comments Under the Neuro-Nonsense Title

 Nautilus magazine is one of those slick "science information" sites where we sometimes get real science and other times get various assorted stuff that is not really science in the sense of being facts. In the latest version of the online magazine, we have an interview with neuroscientist David Eagleman. The interview is found under the ludicrous title "Your Brain Makes You a Different Person Every Day." While it is true that the proteins in the brain have such short lifetimes that an estimated 3% to 4% of your brain proteins are replaced every day, it is false that you are a different person every day.  The persistence and stability of an individual's personality, memory and identity despite such heavy turnover of brain proteins is one of many good reasons for thinking that your mind and memory are not brain effects.  If your brain was the source of your personhood, then given rapid brain protein turnover, you might then be a "different person every day."  But it is not that, and you are not that. 

In the interview, Eagleman claims, "When you learned that my name is David, there’s a physical change in the structure of your brain."  There is no evidence of such a thing.  The claimed evidence (mainly from badly-designed mouse experiments) has a variety of flaws which makes it far less than robust evidence.  No one has ever found a stored memory by examining tissue in a human brain. If the creation of a memory required "a physical change in the structure of the brain," then you could never instantly form a memory. But humans can instantly form permanent new memories.  If someone suddenly sticks a gun in your mouth, you will instantly form a new memory that you will remember the rest of your life. 

Eagleman states, "The brain builds an internal model of the world so it can predict what’s going to happen next." There is no real evidence that such a thing happens in a brain, and no one has ever found any such thing in a brain. No neuroscientist can give a coherent and convincing explanation of how a brain could produce either thoughts or predictions.  

Strangely, Eagleman seems to speak as if neurons are fighting each other inside our brains.  He refers to "this aggressive background of neurons fighting against one another." Funny, I can't remember the last time I felt like I was of "two minds" about anything.  In a similar dubious vein of military speculation, Eagleman then says, "my student Don Vaughn and I worked out a model showing that dreaming appears to be a way of keeping the visual cortex defended every night."  That sounds like one of the least plausible theories of dreaming I have ever heard.  Instead of fighting with each other, the cells in the human body show a glorious harmony in their interactions, displaying teamwork more impressive than that of a symphony orchestra or the construction crew of a skyscraper. 

Commendably, the interviewer asks a good question by asking Eagleman about hemispherectomy patients who show little cognitive damage from the removal of half of their brains. Eagleman offers no explanation for why this would occur if the kind of dogmas he teaches are true, other than the very weak statement that "what this means is that half the real estate disappears and yet the whole system figures out how to function." 

The interviewer then commendably says, "There is a backlash to this idea that everything in the mind is reducible to brain science," and asks Eagleman about that. Eagleman states very incorrectly "that critique has no basis at all." To the contrary, it has a mountainously large basis, consisting of things like the huge amount of evidence discussed in the posts on this site, much of which consists of papers authored by neuroscientists themselves. Speaking briefly like a true-believer dogmatist, Eagleman says, "there's no doubt about this idea that you are your brain," but offers no real support for this claim other than making in the next sentence the strange claim that "Every single thing that happens in your life—your history, who you become, what you’ve seen—is stored in your brain."  

That is a claim that in the human brain there is a record of every single thing a human has experienced, a claim that very few neuroscientists have made. If such a thing were true, it would not at all prove that "you are your brain," since your identity and selfhood and personality are a different thing than your memory. Since neuroscientists have no credible theory of either memory encoding or long-term memory storage in a brain that replaces its proteins at a rate of about 3% per day, the more that humans remember and the longer that humans can remember, the less credible is the theory that memories are stored in brains. So Eagleman is not helping his case at all by making the strange claim that the brain stores every experience a person has ever had. If people did retain memories of everything they had ever experienced, it would be all the harder to explain how that could possibly occur in a brain subject to such rapid turnover and replacement of its proteins. 

Eagleman offers one other little item trying to support his "you are your brain" claim, but it's paltry. He points out that a neurotransmitter called dopamine can affect gambling behavior. But, of course, that does nothing to show that you are your brain. When I had a very bad toothache long ago, it sure affected my behavior, but that didn't show that I am my teeth. And if you sprained your ankle, it would briefly affect your behavior, but it wouldn't show you are your foot. 

Asked about whether "one day we’ll be able to map all the neural connections in someone’s brain and know what kind of person that is," Eagleman says this will never happen in our lifetimes, but "maybe in 300 years, you could read out somebody’s brain."   But if a person believes that the brain stores memories and beliefs, he should be confident that such a thing will soon happen. If brains stored memories and beliefs, we actually should have been able  to read such memories and beliefs decades ago, about the time people were first reading DNA from cells. Maybe somewhere in the back of Eagleman's mind, he knows that neuroscientists are making zero progress in reading memories and beliefs from brains, and that is what caused his pessimistic estimate. 

Towards the end of the interview, Eagleman begins to contradict what he said earlier with such self-assurance. He states, "It appears that consciousness arises from the brain, but there is still a possibility of something else."  When the interviewer commendably follows up on this by saying, "perhaps not everything is generated by the brain" and "we might be tuning in to consciousness somewhere else," Eagleman answers by saying, "I’m not suggesting this is the case, but I am saying this is still a possibility in neuroscience that we have to consider."

So Eagleman ends up contradicting his previous claim that "there's no doubt about this idea that you are your brain."  After speaking like some supremely convinced dogmatist, he now seems to have lost his certitude, and seems to doubt his previous metaphysical claim that he said there was no doubt about.  He ends by saying this regarding a theory of consciousness:  "Not only do we not have a good theory, we don’t even know what a good theory would look like." But such a thought clashes with his claim that "there's no doubt about this idea that you are your brain."

Wednesday, October 7, 2020

Engrams Are Touted Like Phlogiston Was Once Touted

 Scientists were once very convinced that they had figured out how burning works.  They were convinced that things burn because inside them is a combustible element or material called phlogiston, and that during burning this combustible element is released. We now know that this once-cherished theory is entirely wrong.  Like the earlier scientists believing in an incorrect theory of phlogiston, many a neuroscientist believes in the dubious idea that there are engram cells that store memories.  There is no robust evidence for any such thing.  In the post here I discuss some of the very many reasons for rejecting such a theory of neural memory storage. In the post here I discuss some of the flaws in studies that claim to provide evidence for engrams. 

A recent MIT press release claims to have some new evidence for engrams, giving us the not-actually-correct headline "Neuroscientists discover a molecular mechanism that allows memories to form."  You might be impressed by hearing such an announcement from MIT, if you had not read my previous post entitled "Memory Experimenters Have Giant Claims but Low Statistical Power." In that post I examined many cases in which MIT had made impressive-sounding claims about memory research, which were based on studies that tended to be unconvincing because of their too-small study group sizes and low statistical power. It's the same old story in the latest study MIT is touting.  

Here are some phrases I quote from the paper, phrases indicating study group sizes or the number of animals showing some claimed effect:

"n = 3 mice"

"n = 30 mice"

"n = 15 mice"

"n = 3 biologically independent samples" 

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

Alas, we once again have from MIT a memory study that has failed to provide robust evidence. A general rule of thumb is that to get modestly persuasive results, you need to use at least 15 animals per study group. In the latest MIT study, apparently either much smaller sizes were used for some study groups, or the claimed effects occurred in only a small fraction of the animals, such as 4 out of 15 or 4 out of 30. In either case, the results are not compelling. My criticisms of such papers for using too-small study group sizes are partially based on the guideline in the paper "Effect size and statistical power in the rodent fear conditioning literature – A systematic review," which mentions an "estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group)," and says that only 12% of neuroscience experiments involving rodents and fear met such a standard. 
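A rough power calculation illustrates why n = 4 is so weak. The sketch below is my own illustration, not from the cited review: it uses a normal approximation to the power of a two-sided two-sample test, and the effect size d = 1.0 is simply an assumed "typical large effect," so the numbers are approximate rather than exact t-test power.

```python
from statistics import NormalDist

def two_sample_power(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison with n
    animals per group and standardized effect size d, using a normal
    approximation (a simplification of the exact t-test power)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)     # ~1.96 for alpha = 0.05
    noncentrality = d * (n / 2) ** 0.5    # shift of the test statistic
    return z.cdf(noncentrality - z_crit)

print(two_sample_power(1.0, 15))  # ~0.78: near the 80% power guideline
print(two_sample_power(1.0, 4))   # ~0.29: badly underpowered
```

With only 4 animals per group, even a large real effect would be detected less than a third of the time, while spurious "detections" remain just as possible.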

To help understand why results involving only four mice are not convincing, let us imagine a large group of 1000 astrologers scanning birth and death data, eagerly looking for spooky correlations.  They might look for things such as this:

  • A match between a father's month of death and his son's month of birth
  • A match between a father's month of death and his son's month of death
  • A match between a father's month of birth and his son's month of birth
  • A match between a father's month of birth and his son's month of death
  • A match between a mother's month of death and her son's month of birth
  • A match between a mother's month of death and her son's month of death
  • A match between a mother's month of birth and her son's month of birth
  • A match between a mother's month of birth and her son's month of death

Now, if one of the astrologers were to show such a match (or a similar correlation), with only a sample size of four, this would be very unconvincing evidence. For it is not very unlikely that four such matches might occur by chance, particularly if there were many astrologers searching for such a match. If the ratio of matches was 4 out of 15 or 4 out of 30, that also would not be convincing, and not very unlikely to occur by chance. But if the sample size was much larger, showing something like 15 out of 15 such matches, that would be compelling evidence for a real effect, being something very unlikely to occur by chance.  Similarly, experimental results in neuroscience papers should not persuade us when only four animals were used, or when 4 out of 15 or 4 out of 30 animals had some claimed effect. There is too big a chance that such results may be mere false alarms, the kind of matches or correlations that might be showing up merely by chance. When thousands of experimental neuroscientists are busily doing experiments and busily scanning data eagerly looking for correlations that can be interpreted as engram evidence, we would expect that very many false alarms would be popping up, particularly when too-small sample sizes were used such as only  four animals, or when low-percentage effects were claimed, such as 4 out of 15 or 4 out of 30. 
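The "false alarms" worry can be made quantitative with a toy calculation (my own illustration, not from any cited study): if each of many independent searches has some small chance alpha of turning up a spurious "significant" correlation, the chance that at least one search somewhere produces a false alarm grows rapidly with the number of searchers.

```python
def chance_of_false_alarm(n_searches, alpha=0.05):
    """Probability that at least one of n independent searches yields a
    false positive, when each search has false-positive rate alpha."""
    return 1 - (1 - alpha) ** n_searches

for n in (1, 10, 100, 1000):
    print(n, round(chance_of_false_alarm(n), 3))
```

With 100 independent searches at the conventional 5% threshold, a false alarm somewhere is a near-certainty (over 99%), which is why results from small samples found amid many parallel searches deserve heavy skepticism.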

Once again, in the Marco paper we have a neuroscience study using mouse zapping. Typically a study claiming engram evidence will shock a mouse, and then later send some burst of energy or light to some cells where the scientists think the memory is stored. A claim will be made that this caused the mouse to freeze (in other words, not move) because the burst of energy or light has activated the fearful memory. Such a methodology is laughable. For one thing, it is hard to accurately measure the degree of freezing (non-movement) in a mouse, and judgments of a degree of freezing tend to be subjective. A measurement of heart rate (looking for a sudden spike) is a fairly reliable way to measure whether a fearful memory is being recalled, but such a technique is not used in such neuroscience studies. Also, if freezing behavior (non-movement) occurs, we have no way of knowing whether this is caused by a recall of a fearful memory, or whether it is an effect produced by the very burst of energy or light sent into the mouse's brain. It is known that there are many areas of a mouse's brain that, if zapped, will cause the mouse to show freezing behavior. (The Marco paper uses the same unreliable technique of judging fear by trying to measure freezing behavior of mice, rather than the reliable technique of measuring heart rate spikes.) One of quite a few reasons why trying to measure freezing behavior in mice is not a reliable way of determining fear is that fear typically produces in animals the opposite of freezing behavior: a fleeing behavior. Over my long life I have very many times seen a mouse around my living quarters, but never, ever saw a mouse freeze when I walked near it (the mice always fled instead). 

In the MIT press release, we are told the scientists shocked some genetically modified mice, and that the mice then began to produce some protein marker. We have no way of knowing whether the production of such a protein marker had anything to do with an alleged formation of a memory in the brain. Organisms such as mice are forming new memories all the time, and also producing new proteins all the time. The formation of the protein could have been merely the result of the electrical shocking, not the formation of a new memory. Or the protein could have formed simply because proteins are constantly forming in the brain, which replaces its proteins at a rate of about 3% per day (as discussed below). Electrically shocking an organism probably produces many a brain effect that has nothing to do with memory formation. We can compare the brain during electrical shocking to a pinball machine that lights up in many places at certain times. 

The MIT press release gives a quote by the post-doc researcher Marco that gives us a hint that he may be a bit on the wrong track. We read this:

“ 'The formation and preservation of memory is a very delicate and coordinated event that spreads over hours and days, and might be even months — we don’t know for sure,' Marco says. 'During this process, there are a few waves of gene expression and protein synthesis that make the connections between the neurons stronger and faster.' ”

It is utterly false that the formation of a memory requires "hours and days, and might be even months." To the contrary, we know that a human being can form permanent new memories instantly. If someone sexually assaults you or puts a gun in your mouth, you will instantly form a permanent memory of that event that will probably last the rest of your life. But synthesizing even a single average-sized protein takes on the order of a minute or more. The fact that humans can form permanent new memories instantly is one of the strongest reasons for rejecting all claims that memories are formed when engrams (new cells or new cell proteins) are produced. The formation of neural engrams would necessarily take a length of time sufficient to prevent the instantaneous formation of permanent new memories. 

The ability of humans to form new memories in only three seconds was shown by a scientific experiment discussed in this post. 

We would take much, much longer to acquire new memories if the theory of engrams (neural memory storage) was correct. Discussing the rate of translation (something that must occur during the synthesis of a new protein), the source here states, "It was found that the rate is quite constant across proteins and is about 6 amino acids per second." Another article agrees, citing a speed of 6 to 9 amino acids per second. The average eukaryotic protein has a length of about 472 amino acids, according to this source. Dividing 472 by 6, we are left with the conclusion that the synthesis of a single new protein must take well over a minute. We cannot be forming new memories by some "engram creation" requiring the synthesis of new proteins, because we can acquire new memories instantly. 
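The arithmetic can be checked directly with the figures quoted above (a sketch of my own, covering both ends of the quoted 6-to-9 amino-acids-per-second range):

```python
# Time to synthesize one average eukaryotic protein (~472 amino acids)
# at the quoted translation rates of 6 to 9 amino acids per second.
avg_protein_length = 472  # amino acids

for rate in (6, 9):       # amino acids per second
    seconds = avg_protein_length / rate
    print(f"{rate} aa/s -> about {seconds:.0f} seconds")
```

Even at the faster rate, building a single protein takes most of a minute, which is far slower than the instant memory formation humans demonstrably display.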

[Image: "engrams debunked"]
The 2018 paper here gives us a reason for rejecting all claims that memories are stored in brains. The paper finds that proteins in the human brain are replaced at a rate of about 3% to 4% per day. Unlike very many neuroscientists, who seem very skilled at ignoring the implications of their own findings, the authors actually seem to have a clue about the implications of their research. We read the following:

"Here we show that brain tissue turns over much faster at a rate of 3–4% per day. This would imply complete renewal of brain tissue proteins well within 4–5 weeks. From a physiological viewpoint this is astounding, as it provides us with a much greater framework for the capacity of brain tissue to recondition. Moreover, from a philosophical perspective these observations are even more surprising. If rapid protein turnover of brain tissue implies that all organic material is renewed, then all data internalized in that tissue are also prone to renewal. These findings spark (even) more debate on the interpretation and (long-term) storage of data in neural matter, the capacity of humans to consciously or unconsciously process data, and the (organic) basis of our own personality and ego." 

The authors rightly seem to be hesitating about whether there actually is an organic basis for our personality and ego. Given a protein replacement rate of 3% per day in the brain, we would not be able to remember things for more than about a month if our memories were created as brain engrams.  
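The 3%-per-day figure can be read in two ways, and either reading yields roughly the time scale the quoted paper describes. The sketch below is my own arithmetic, not a calculation from the paper:

```python
# Two readings of a 3%-per-day brain protein turnover rate.
daily_turnover = 0.03

# Naive linear reading: days until 100% of protein mass is replaced.
linear_days = 1 / daily_turnover
print(f"linear reading: ~{linear_days:.0f} days to full replacement")

# Exponential reading: fraction of the ORIGINAL protein still present
# after t days, if 3% of whatever remains is replaced each day.
remaining_35d = (1 - daily_turnover) ** 35
print(f"exponential reading: {remaining_35d:.0%} of original left after 35 days")
```

On the linear reading the brain's proteins are fully replaced in about a month; on the exponential reading only about a third of the original material survives after 35 days, and almost none after a few months. Either way, long-term protein-based storage faces the same problem.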

Postscript: This month the Science Daily site (which so often has hyped headlines not matching any robust research) has been showing a headline of "New Player in Long Term Memory."  The article is about a paper that suffers from the same problems as the paper discussed above.  The paper provides no real evidence for any physical effect in the brain causing memory consolidation.  Examining the paper, I find the same old problems that are found again and again and again in papers of this type, such as the following:

(1) Too-small study group sizes, with several being less than 8 animals per study group (15 is the minimum for a moderately reliable result).
(2) A study involving only mice, not humans.
(3) A use of an unreliable method for judging fear in animals (trying to measure the amount of time a mouse is "frozen" in fear), rather than use of a reliable fear-detection method such as measuring heart rate spikes. 
(4) Citations to other papers that suffered from the same type of problems.

Looking further at the Marco paper (which is behind a paywall, but kindly provided to me by a scientist), I see other methodological problems with it. For one thing, mouse brains were studied hours  after some foot-shocking of mice,  which means there wasn't any real-time matching between a memory creation event and something happening in a brain.  The paper also informs us that "blinding was not applied in the behavioral studies (CFC) and imaging acquisition because animals and samples need to be controlled by treatment or conditions."  Blinding is a very important procedural precaution to prevent biased data acquisition and biased analysis, and we should be suspicious of experimental studies that fail to thoroughly implement blinding protocols.  The paper also makes no claim to be a pre-registered study. When a study does not pre-register a hypothesis to be tested, the scientists running the study are free to go on a "fishing expedition" looking in countless places for some type of association or correlation; and in such cases there is a large chance of false alarms occurring.