Friday, November 20, 2020

No One Can Credibly Explain Why a Brain Would Store a Memory in One Specific Spot

The theory of Santa Claus's Christmas activity is one that very small children will accept, but one that a child will discard once he gets a little older.  There are too many obvious defects in the theory for a mature mind to hold it: the impossibility of fitting toys for all the world's children in a single sled, the impossibility of such a sled delivering millions of toys in a single night, and the impossibility of Santa Claus getting into so many locked homes.  Like the theory of Santa Claus, the theory that brains store memories does not hold up well to scrutiny.  Among dozens of good reasons for rejecting the theory are these:

  • the fact that brain proteins have a lifetime of less than two weeks, which is 1000 times shorter than the longest length of time that humans can remember things (60 years or so);
  • the fact that no one has any coherent explanation as to how human learned knowledge could ever be translated into neural states or synapse states;
  • the fact that humans can form new memories instantly, much faster than the time required for some kind of cellular or synapse modification to occur;
  • the fact that no one has ever found any trace of stored information (other than the DNA information in all cells) by studying brain tissue;
  • the fact that removing half of someone's brain (as is sometimes done to treat epilepsy patients) has little effect on memory;
  • the fact that no one can explain how a brain (without any indexing system and without any position notation system) could ever instantly find the exact spot where some memory was stored in it, which would be like instantly finding a needle in a haystack. 
The more we scrutinize the theory that memories are stored in brains, the more problems we become aware of. Let me discuss a problem that was not among the 30 reasons I previously gave for rejecting the claim that memories are stored in brains: the problem that no one can give a credible explanation as to why a brain would store a memory in one specific spot in the brain.

Let us consider some examples of information storage, and consider the question:  when a piece of information is stored, why is it stored at the specific place that it is stored?

  • Arrival of a new email: all new emails are put at the top of a “stack” of emails.
  • A student taking notes on one day in a class on some subject: the student selects the notebook for that subject, and writes on the first blank page of the notebook.
  • A person making a diary entry: the person makes the entry on whatever page is marked with a date corresponding to that day's date.
  • You save a new file on your computer: you are provided an interface allowing you to select some folder or directory on your digital device; after you choose a name for the file, the operating system creates a new file in the specified location, using an operating system routine for selecting empty space in that location.
  • You buy a book, and take it to your house: you manually select at random an empty space on a bookshelf, and put the book there.
  • You receive an important letter you want to save: you select the appropriate file folder in your file box or file cabinet, and stick the letter in that folder.
  • You add an item to a “to do list” document you have on your computer: you simply scroll down to the end of the document, and write the new item at the end.
  • You type some new text in whatever computer document you are currently working on: within your document is a blinking cursor that represents the current position, and your newly typed text is added at that position.
  • You take a new photo with your digital camera: the digital storage card in your camera is like a stack of photos, and each new photo gets added at the end or beginning of the stack.

So we can see that when information is physically stored, there are specific reasons why particular items of information get stored in specific locations. Let us now consider the human brain, and the theory that a new memory gets stored in some tiny little spot in the brain. Such a theory raises the question: why would a brain store some new memory exactly at that spot, rather than any of 10,000 other little spots in the brain? There are various possibilities you can imagine, but none of them seem to be credible. 

One possibility you might imagine is that a brain puts a new memory kind of "at the top of the stack" or "at one end of a chain." With enough imagination, you can picture extraterrestrial organisms that might have some kind of stack-like brain or chain-like brain, so that the organism might put each new memory at the top of such a stack or at one end of such a chain. But the human brain bears no resemblance to a chain or a stack. There is no "end writing position" or "first writing position" in the brain to which a brain could write if it were following a "put new information at the end" rule or a "put new information at the beginning" rule. 

Another possibility you might imagine is that a brain has something like a cursor or a movable write unit that moves from place to place in the brain, writing memories at different locations. If the brain had such a thing, we could explain why a brain would store a memory in one specific spot: the writing of a new memory would simply occur at whatever location the cursor or movable write unit happened to be. However, the human brain has no such thing as a cursor or movable write unit.  Nothing moves around in the brain other than electricity and chemicals.  We can certainly imagine some strange extraterrestrial organism whose brain includes a movable writing unit with the job of moving around and writing to different locations, but there is no sign of any such thing in the brain. 
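To see how artificial both of those rules are, here is a minimal Python sketch (purely illustrative, and of course not a model of any brain) of "write at the top of a stack" and "write at the cursor position." Both rules only work because the storage medium has explicitly numbered positions and an explicit current write position, which is exactly what the brain lacks.

```python
class StackStore:
    """Stores each new item at the top of a stack (like newly arrived emails)."""
    def __init__(self):
        self.items = []

    def store(self, item):
        self.items.append(item)       # the "end" of the list is a known, numbered position
        return len(self.items) - 1    # the index where the item landed


class CursorStore:
    """Stores each new item at a movable cursor position (like a text document)."""
    def __init__(self, size):
        self.cells = [None] * size    # fixed, numbered storage cells
        self.cursor = 0               # an explicit current write position

    def store(self, item):
        position = self.cursor
        self.cells[position] = item
        self.cursor += 1              # the cursor advances to the next numbered cell
        return position


if __name__ == "__main__":
    emails = StackStore()
    print(emails.store("new email"))      # prints 0: stored at a determinate index
    doc = CursorStore(size=10)
    print(doc.store("newly typed text"))  # prints 0: stored wherever the cursor pointed
```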

You do not get around this difficulty of explaining why storage would occur at some exact location by speculating that there is one tiny brain region (such as the hippocampus)  where the brain stores all its new memories.  For such a region of the brain would consist of 10,000 smaller sub-regions, and the question would always remain: why was the memory put in one specific spot rather than in any of the other 10,000 spots?

We cannot get around this difficulty by imagining that a brain simply selects a random brain location to write some memory.  The selection of one specific random location is something that a human mind or a computer program can do, but there is no evidence that the human body ever subconsciously selects a random location within itself.  If you ask me to select a random city in America, I have knowledge of the cities in America and a mind capable of performing such a random selection task.  But it would be absurd to maintain that a brain has some kind of subconscious knowledge of a set of possible brain locations where a memory could be written, and some kind of subconscious ability to make a random choice from such a set of locations, subconsciously choosing a random place to write a memory. Nor could we ever explain how a brain (completely lacking any coordinate system or position notation system) could cause a memory to be stored exactly in some precise spot that it had randomly selected. Such a thing would be as hard as writing to hay strand #282,035 after your mind had randomly chosen that strand as the place in a huge haystack where something should be written.  
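Purely as an illustration (again, not a claim about how any brain works), here is a minimal Python sketch of what "pick a random spot and store something there" involves for a computer: an enumerated set of addresses, a random selection from that set, and the ability to write to, and later read from, the chosen address. Each ingredient presupposes the kind of addressing scheme that no one has found in brain tissue.

```python
import random

NUM_LOCATIONS = 500_000          # an enumerated address space (hay strands, "little spots", etc.)
storage = {}                     # address -> stored item

def store_at_random_location(item):
    # Choosing randomly still requires knowing the set of possible addresses...
    address = random.randrange(NUM_LOCATIONS)
    # ...and a way to write to exactly that address.
    storage[address] = item
    return address

if __name__ == "__main__":
    where = store_at_random_location("some new memory")
    print(f"stored at location #{where}")
    # Later retrieval requires looking up that same address.
    print(storage[where])
```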

You also do not get around this difficulty by speculating that a brain stores a single new memory in very many separate spots, for that creates a host of difficulties, such as how the memory could be divided up among so many different spots, and how the information could be instantly distributed to so many different spots. There would also be the extremely great difficulty that a memory stored in many different spots would be like a page whose words were scattered so that each word was stored in a different spot in your home. Just as such scattering would make it a thousand times harder to instantly retrieve the information on the page, a memory scattered among a thousand different brain places would be vastly harder to retrieve, making it all the harder to explain how humans are able to instantly retrieve a memory.  Moreover, if we imagine a thousand different storage locations for a single memory, then we simply have the original problem a thousand times worse; for the question would then be: why were those thousand locations chosen rather than any of a million other possible sets of a thousand places to store the memory?

There is no credible theory of how a neurally stored memory would end up in one specific spot in the brain, rather than any of a thousand other little spots in the brain.  What I have discussed here is only one of very many reasons why the idea of a neural storage of memories is untenable. 

Let us consider a case in which a memory arises, and what neuroscientists would need to explain under the theory that memories are stored in brains. Let's imagine a case in which a 13-year-old boy is terrified when someone sticks a gun in his mouth. The boy grows into a man who remembers this event for 70 years; and whenever he sees a handgun (even guns of a different color or caliber), he instantly thinks of that moment when someone placed a gun in his mouth. Here are the things that would need to be explained under the theory that memories are stored in brains.
  • How a brain could instantly form a permanent memory (for such a memory would appear instantaneously as soon as this traumatic event occurred), at a speed many times faster than the minutes required for some protein synthesis needed for synapse strengthening or synapse modification. 
  • How a brain could translate into neural states or synapse states this sensory experience of having a gun placed in your mouth.
  • How a brain could somehow select some location (among countless thousands of brain spots) for this memory to be stored.
  • How a brain could somehow find such a location inside a brain that has no coordinate system and no position notation system, so that the memory could be stored in such a location.
  • How a brain could instantly retrieve this memory whenever the boy saw a gun, which would be like instantly finding a needle in a haystack, given a brain with no coordinate system and no position notation system.
  • Why such a memory could be retrieved by a brain, even when the person saw guns of a different color and caliber than the gun that was inserted in his mouth. 
  • How this neural memory trace would somehow be translated into a recollection briefly active in the person's mind after he saw a gun years later. 
  • How this memory could ever be accurately stored and accurately recalled (with a transmission across innumerable synapses) in a brain with so much signal noise that each time a signal passes across a synapse, it is transmitted with a reliability of less than 50%. 
  • How this memory could be preserved for 70 years, in a brain consisting of proteins with such short lifetimes (two weeks or less) that 3% of the brain's proteins are replaced every day. 
To  explain this case of the boy instantly forming this memory in a brain and remembering it for 70 years, neuroscientists would need to explain all of these things. Neuroscientists cannot even give a credible explanation for any one of these things.  

Sunday, November 8, 2020

Preprint Server Counts Suggest Engrams Are Not Really Science

 The arXiv science paper server at https://arxiv.org/ is a widely used resource for finding and reading scientific papers. On its home page we read, "arXiv is a free distribution service and an open-access archive for 1,780,158 scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics."  It has become something of a custom for physicists to upload "preprints" of physics papers to this server. Although mainly associated with physics papers, the server also has a huge number of quantitative biology papers. 

An interesting way to use the arXiv server is simply to search for a topic, and see how high the paper count is (in other words, how many papers the server has on a particular topic). Such a method gives a rough idea of how much work has been done on a particular topic. It is not at all true that you can prove something is really science by doing a search for some topic and getting a high paper count. For example, when I search for papers with the word "string" in the title, on October 23, 2020, I get a count of 12,766 papers, a large fraction of which are papers expounding versions of string theory. But string theory is a speculative edifice that is not at all "science with a capital S," and has no observational basis. 

While we can't tell that something is science just by searching for a topic and getting a high paper count, if we search for a topic and get a very low count, that is a reason for suspecting that the topic may not be any such thing as "science with a capital S."  That's what happens when I search for the topic of "engram."  An engram is an alleged brain location where a memory is stored, or some kind of "memory trace" in the brain.  When I search for papers having "engram" or "engrams" in their title, using the arXiv science paper server, the server gives me a count of 0 such papers. 
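For readers who want to check such counts themselves, here is a rough sketch that queries the public arXiv API for papers with a given word in the title. It assumes the API's standard "ti:" field prefix, and the counts it returns today will of course differ from the October 2020 figures quoted in this post.

```python
import re
import urllib.parse
import urllib.request

def arxiv_title_count(term):
    """Return how many arXiv papers have the given term in their title."""
    query = urllib.parse.quote(f'ti:"{term}"')
    url = f"http://export.arxiv.org/api/query?search_query={query}&max_results=1"
    with urllib.request.urlopen(url) as response:
        feed = response.read().decode("utf-8")
    # The Atom feed reports the total match count in an opensearch:totalResults element.
    match = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed)
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    for topic in ["cancer", "brain", "tissue", "engram", "engrams"]:
        print(topic, arxiv_title_count(topic))
```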

Could it be that the arXiv science paper server just doesn't have many papers on biology? No, it has tons of papers on quantitative biology.  Below are a few examples of paper counts when I search for some biology topics:

Number of papers on the arXiv server having the topic in their title:

  • cancer: 1115 papers
  • COVID-19: 1738 papers
  • brain: 2046 papers
  • tissue: 708 papers
  • engram: 0 papers
  • engrams: 0 papers

So how come the server gives us no papers when we search for "engram" as the topic? Maybe it's because engrams aren't really science with a capital S. 

There's another way to do a search on the arXiv server. You can search for any use of the search topic in the abstract of the paper. When I do such a search, I get only 5 papers. Four of the five papers have no solid observational grounding, and are the kind of mathematical speculation papers that scientists write when they attempt to substantiate very doubtful speculations such as string theory or dark energy or primordial cosmic inflation.  The only paper built upon observations is a paper entitled "Recording and Reproduction of Pattern Memory Trace in EEG by Direct Electrical Stimulation of Brain Cortex."  The paper does not actually provide robust evidence that any such thing as a memory trace was detected.  To do such a thing, you would need a study group of at least 15 animals, but we read in the paper that "the experiments were performed on 5 outbred male rats."  With such a too-small study group, there is too high a chance of a false alarm.  

There is another "preprint paper server," one more oriented toward biology papers.  It is called bioRxiv, and it bills itself as "the preprint server for biology."  When I use that server to look for papers that contain "engram" in the title, I get only 6 papers.  Below is a comparison with other topics:

Number of papers on the bioRxiv server having the topic in their title:

  • cancer: 2777 papers
  • COVID-19: 376 papers
  • brain: 2651 papers
  • tissue: 1021 papers
  • engram: 6 papers
  • engrams: 8 papers

The first of these six papers using "engram" in its title is a speculative paper with no observational grounding. The second of these six  papers uses study group sizes of only 5, which are way too small to provide any robust result.  The third paper has a similar problem, using study group sizes of only 8, way too small to provide any robust result.  The fourth paper is a mouse study that fails to mention anywhere how many mice were used, which typically occurs only when some way-too-small study group size was used.  The fifth paper suffers from the same problem, the only difference being that it vaguely suggests that way-too-small study group sizes of only 4 were used.  The sixth paper uses way-too-small study group sizes of only about six. 

Now let's look at the eight papers using "engrams" in their title. The first paper has "schematic" visuals based on imaginary hypotheticals.  The second paper tries to use the word "engrams" as much as it can, but provides no physical evidence for such a thing. The third paper was a rodent study using study group sizes of only about 8, way too small for a robust result.  The fourth paper was a rodent study using study group sizes of only about 5, way too small for a robust result. The paper confesses, "Data collection and analysis were not performed blind to the conditions of the experiments," a major procedural defect. The fifth paper is a theoretical paper not providing any observational results. The sixth paper and the seventh paper used way-too-small study group sizes of only 5.  The eighth paper is merely a theoretical work based on mathematical simulations. 

So the six papers on the bioRxiv server using "engram" in their title fail to provide any robust evidence of engrams. It's the same story for the 8 papers using "engrams" in their title. All in all, we have in these very low server counts (and the weaknesses of the papers coming up in the searches) a strong suggestion that engrams (supposed neural storage sites for memories) are not any such thing as well-established science, and that the evidence for engrams is merely the very weak evidence conjured up by scientists clumsily trying to provide some support for something they want to believe in. Engrams are not an example of science with a capital S. 

My criticisms of such papers for using too-small study group sizes are partially based on the guideline in the paper "Effect size and statistical power in the rodent fear conditioning literature – A systematic review," which mentions an "estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group)," and says that only 12% of neuroscience experiments involving rodents and fear met such a standard. 
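For readers who wonder where a figure like 15 animals per group comes from, the sketch below runs a standard power calculation for a two-group comparison using the statsmodels library. The assumed effect size (Cohen's d of 1.0) is only a stand-in for a "typical" large effect; it is not a number taken from the review just cited.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many animals per group are needed for 80% power at the usual 0.05 threshold,
# assuming an effect size of d = 1.0?
n_per_group = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80,
                                    alternative="two-sided")
print(f"animals needed per group: {n_per_group:.1f}")   # roughly 17

# And how much power does a group of only 5 animals actually have?
power_with_5 = analysis.solve_power(effect_size=1.0, nobs1=5, alpha=0.05,
                                     alternative="two-sided")
print(f"power with 5 animals per group: {power_with_5:.2f}")   # roughly 0.3
```

With those assumptions the answer comes out to about 17 animals per group, in line with the 15-animal guideline, while groups of 4 or 5 animals leave an experiment far below the 80% power mark.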

None of these papers I have referred to (on either preprint server) claims to have used a blinding protocol for both data gathering and data analysis. Most of them make no claims about blinding, which is usually a sure sign that no blinding protocol was followed. One paper makes a brief claim to have used a blinding protocol for experimentation, but makes no such claim for data analysis. Another paper claims briefly to have used a blinding protocol for statistical analysis, but makes no such claim in regard to experimentation and data gathering.  None of these papers describes in detail a specific blinding protocol. 

When blinding protocols are not thoroughly implemented, there is a large chance of bias and scientists reporting hoped-for effects that are not really there.  Unless a paper describes in detail a blinding protocol, you should be rather skeptical that any halfway-decent blinding protocol was used.  Similarly, if someone says, "I paid all my taxes," but doesn't release his tax forms, you should be rather skeptical that he did pay all his taxes. 

The failure of experimental neuroscientists to adequately follow blinding protocols is a huge problem in contemporary neuroscience research, as big as the failure of most such neuroscientists to use adequate study group sizes. Be suspicious of junk science wherever you find experiments not using proper blinding protocols.  A PLOS Biology article tells us, "Recent analyses have found, for example, that 86%–87% of papers reporting animal studies did not describe randomisation and blinding methods, and more than 95% of them did not report on the statistical power of the studies to detect a difference between experimental groups." 

Ian Stevenson MD once made some candid comments relevant to the topic of engrams, stating this:

"Neuroscientists and psychologists cannot tell us either how we store memories or how we retrieve them. Suggestions that experiences leave 'traces' in the brain (whether in altered neural networks or otherwise) have not so far led to further understanding." 


Friday, October 30, 2020

Inaccurate Titles and Misleading Citations Are Common in Science Papers

 I have discussed at some length on this blog problems in science literature such as poor study design, insufficient study group size, occasional fraud, misleading visuals and unreliable techniques for fear measurement. Such things are only some of the many problems to be found in neuroscience papers. Two other very common problems are:

(1) Scientific papers often have inaccurate titles, making some claim that is not actually proven or substantiated by the research discussed in the paper.

(2) Scientific papers often make misleading citations to papers that did nothing to show the claim being made. 

Regarding the first of these problems, scientists often write inaccurate titles to try to get more citations for their papers. For the modern scientist, the number of citations for papers he or she wrote is a supremely important statistic, regarded as a kind of numerical "measure of worth" as important as the batting average or RBI statistic is for a baseball hitter. At a blog entitled "Survival Blog for Scientists" and subtitled "How to Become a Leading Scientist," a blog that tells us  "contributors are scientists in various stages of their career," we have an explanation of why so many science papers have inaccurate titles:

"Scientists need citations for their papers....If the content of your paper is a dull, solid investigation and your title announces this heavy reading, it is clear you will not reach your citation target, as your department head will tell you in your evaluation interview. So to survive – and to impress editors and reviewers of high-impact journals,  you will have to hype up your title. And embellish your abstract. And perhaps deliberately confuse the reader about the content."

Is this how today's scientists are trained?

A study of inaccuracy in the titles of scientific papers states, "23.4 % of the titles contain inaccuracies of some kind."

The concept of a misleading citation is best explained with an imaginary example.  In a scientific paper we may see some line such as this:

Research has shown that the XYZ protein is essential for memory.34

Here the number 34 refers to some scientific paper listed at the end of the scientific paper. Now, if the paper listed as paper #34 actually is a scientific paper showing the claim in question, that this XYZ protein is essential for memory, then we have a sound citation. But imagine if the paper does not show any such thing. Then we have a misleading citation.  We have been given the wrong impression that something was established by some other science paper. 

A recent scientific paper entitled "Quotation errors in general science journals" tried to figure out how common such misleading citations are in science papers.  It found that such erroneous citations are not at all rare. Examining 250 randomly selected citations, the paper found an error rate of 25%.  We read the following:

"Throughout all the journals, 75% of the citations were Fully Substantiated. The remaining 25% of the citations contained errors. The least common type of error was Partial Substantiation, making up 14.5% of all errors. Citations that were completely Unsubstantiated made up a more substantial 33.9% of the total errors. However, most of the errors fell into the Impossible to Substantiate category."

When we multiply the 25% figure by 33.9%, we find that, according to the study, about 8.5% of citations in science papers are completely unsubstantiated. That is a stunning degree of error. We would perhaps expect such an error rate from careless high-school students, but not from careful scientists. 

This 25% citation error rate found by the study is consistent with other studies on this topic. In the study we read this:

"In a sampling of 21 similar studies across many fields, total quotation error rates varied from 7.8% to 38.2% (with a mean of 22.4%) ...Furthermore, a meta-analysis of 28 quotation error studies in medical literature found an overall quotation error rate of 25.4% [1]. Therefore, the 25% overall quotation error rate of this study is consistent with the other studies."

In the paper we also read the following: "It has been argued through analysis of misprints that only about 20% of authors citing a paper have actually read the original."  If this is true, we can get a better understanding of why so much misinformation is floating around in neuroscience papers.  We repeatedly have paper authors spreading legends of scientific achievement, which are abetted by incorrect paper citations often made by authors who have not even read the papers they are citing.  

A recent article at Vox.com suggests that scientists are just as likely to make citations to bad research that can't be replicated as they are to make citations to good research. We read the following:

"The researchers find that studies have about the same number of citations regardless of whether they replicated. If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers."

We also read the following troubling comment:

"Blatantly shoddy work is still being published in peer-reviewed journals despite errors that a layperson can see. In many cases, journals effectively aren’t held accountable for bad papers — many, like The Lancet, have retained their prestige even after a long string of embarrassing public incidents where they published research that turned out fraudulent or nonsensical...Even outright frauds often take a very long time to be repudiated, with some universities and journals dragging their feet and declining to investigate widespread misconduct."

Thursday, October 22, 2020

When Mainstream "Science Information" Sites Promote Mind Poisons

 Many people have the idea that if you keep reading mainstream sites that are commonly called "science information" sites, you will become a better citizen. Some people think that if you read such sites, you will frequently be reminded of how bad a problem global warming is, and that you will therefore be moved to reduce your carbon footprint. Other people think that if you read such "science information" sites, you will be a good global citizen, get all of your required vaccinations, and eat genetically modified food like our corporations wish you to do.  

I'm not sure there is any very good evidence that science knowledge causes people to be better global citizens.  These days a person's carbon footprint tends to be proportional to his or her wealth, a factor that is independent of a person's science knowledge. Furthermore, it is possible that after reading the articles on "science information" web sites, you might have a greater tendency to become morally indifferent.  That's because sometimes our mainstream "science information" websites publish articles that might tend to destroy any moral tendencies you had, if you took seriously what you were reading. 

I may use the term "mind poisons" for theories that tend to produce moral indifference in anyone who believes in them. One such theory (occasionally promoted on mainstream "science information" sites) is the theory that there are an infinite number of parallel universes containing an infinite number of copies of you, each a little different.  This insane notion is the idea that every instant the universe is kind of splitting into an infinite number of copies of itself, so that every possibility is actualized.  There is no evidence or any good reason for believing in such nonsense, but it is occasionally sold on mainstream "science information" sites as if it were a respectable physics theory.  

It is easy to explain why such a theory promotes moral indifference. If every possibility is happening, and there are an infinite number of copies of you and everyone else, each a little bit different, then there would be no point in ever acting morally. For example, if you were walking along the street, and saw someone bleeding heavily, rather than phoning for help, you would think there was no point in acting, on the grounds that regardless of what you do, there will be an infinite number of parallel universes in which the person survives, and an infinite number of parallel universes in which the person bleeds to death. 

Another example of a morally destructive mind poison is the theory of determinism, the theory that humans do not have free will.  Such a theory is based on the erroneous idea that decisions arise from brain states.  The idea is that you have no free will because your decisions are produced by brain states, which follow inevitably from atomic arrangements. The posts on this site do a good job of exploding the rationale for this philosophical theory. There is actually no understanding of how mind or memory can be brain effects, and there are very strong neuroscience reasons for believing that neither mind nor memory can be brain effects. No one has any real understanding of how neurons could ever cause an idea, a memory storage, a memory recollection or a decision.  So your decisions cannot be explained away as mere brain effects, and you very much do have free will. 

It is rather obvious why determinism is a morally destructive idea. If you believe that you have no free will and must act exactly as you act, then you will tend to have no guilt about anything you do. Determinism is contrary to all human experience, contrary to what we know about the brain (something very different from commonly peddled myths about the brain), and very morally destructive; it can accurately be described as evil nonsense. 

 But the other day I saw the evil nonsense of determinism being promoted on a widely read web site that is commonly regarded as a "science information" web site. I will not link to the article, because my new policy is never to cause readership for those who teach such morally ruinous absurdities. I may merely note that the blog post promoting this determinism bunk was written by someone who has never shown any signs of being a serious scholar of either mental phenomena or neuroscience.

So these are two cases in which mainstream "science information" sites have promoted morally ruinous mind poisons.  There is a third such case. On some  of the leading sites regarded as "science information" sites, I recently read an article promoting the simulation hypothesis, the hypothesis that you are merely part of some computer simulation set up by extraterrestrials. 

That sites calling themselves "science sites" would be promoting such nonsense is merely additional proof that much of what you read on such sites is neither science nor rational speculation.  We have zero reasons for believing that a computer could ever produce consciousness, and have never observed any computer produce the slightest trace of consciousness.  So believing that you are just part of some computer simulation is as silly as believing that your mother is merely a TV series character that climbed out of your wide-screen TV set. 

The simulation hypothesis is as morally destructive as the other two ideas I previously mentioned, although most people fail to see why that is so.  The reason is that once you believe that you are merely part of a computer simulation created by extraterrestrials, you will tend to doubt that the people you observe with your eyes really exist. 

If some extraterrestrials had caused your consciousness to arise by creating some computer simulation, there is not the slightest reason to think that they would follow some rule that every person observed in the simulation has their own consciousness.  It would be almost infinitely easier to set up a simulation in which most of the bodies seen in the simulation were merely software routines that had no consciousness at all. That would be rather like a video game. In a video game there is a single conscious agent (yourself) interacting with various computer-generated characters that are merely software routines without any consciousness. 

So once a person believes that he is part of a computer simulation created by extraterrestrials, he may  tend to believe that the people he sees in the world are not conscious minds like himself, but merely "characters in the simulation," like video game characters.   That simulation believer will then feel absolutely free to commit any wicked act he pleases, thinking he is not causing any real pain by doing such things.  Similarly, while playing a video game you feel free to cause as much on-the-screen bloodshed as you wish, and don't worry that pain is being caused by such actions that occur in your video game. 

So it should be clear that the simulation hypothesis is a morally destructive doctrine, which may lead someone to kill, injure and rape without having any remorse.  We can therefore accurately say that the simulation hypothesis is a type of mind poison. But exactly this mind poison was being promoted recently on several leading mainstream sites that call themselves "science information" sites. 

Clearly, we must use our critical faculties when reading what is on so-called "science information" sites, because while such sites mainly teach truth, they often promote claims that are untrue or vastly improbable, and occasionally promote mind poisons that are evil nonsense. Sadly, some of the world's worst nonsense is sometimes to be found on mainstream "science information" sites. 

Wednesday, October 14, 2020

The Dubious Comments Under the Neuro-Nonsense Title

Nautilus magazine is one of those slick "science information" sites where we sometimes get real science and other times get various assorted stuff that is not really science in the sense of being facts. In the latest issue of the online magazine, we have an interview with neuroscientist David Eagleman. The interview is found under the ludicrous title "Your Brain Makes You a Different Person Every Day." While it is true that the proteins in the brain have such short lifetimes that an estimated 3% to 4% of your brain proteins are replaced every day, it is false that you are a different person every day.  The persistence and stability of an individual's personality, memory and identity despite such heavy turnover of brain proteins is one of many good reasons for thinking that your mind and memory are not brain effects.  If your brain were the source of your personhood, then given rapid brain protein turnover, you might indeed be a "different person every day."  But it is not that, and you are not that. 

In the interview, Eagleman claims, "When you learned that my name is David, there’s a physical change in the structure of your brain."  There is no evidence of such a thing.  The claimed evidence (mainly from badly-designed mouse experiments) has a variety of flaws that make it far from robust.  No one has ever found a stored memory by examining tissue in a human brain. If the creation of a memory required "a physical change in the structure of the brain," then you could never instantly form a memory. But humans can instantly form permanent new memories.  If someone suddenly sticks a gun in your mouth, you will instantly form a new memory that you will remember the rest of your life. 

Eagleman states, "The brain builds an internal model of the world so it can predict what’s going to happen next."  There is no real evidence that such a thing happens in a brain, and no one has ever found any such thing in a brain.  No neuroscientist can give a coherent and convincing explanation of how a brain could produce either thoughts or predictions.  

Strangely, Eagleman seems to speak as if neurons are fighting each other inside our brains.  He refers to "this aggressive background of neurons fighting against one another." Funny, I can't remember the last time I felt like I was of "two minds" about anything.  In a similar dubious vein of military speculation, Eagleman then says, "my student Don Vaughn and I worked out a model showing that dreaming appears to be a way of keeping the visual cortex defended every night."  That sounds like one of the least plausible theories of dreaming I have ever heard.  Instead of fighting with each other, the cells in the human body show a glorious harmony in their interactions, displaying teamwork more impressive than that of a symphony orchestra or the construction crew of a skyscraper. 

Commendably, the interviewer asks a good question by asking Eagleman about hemispherectomy patients who show little cognitive damage from the removal of half of their brains. Eagleman offers no explanation for why this would occur if the kind of dogmas he teaches are true, other than the very weak statement that "what this means is that half the real estate disappears and yet the whole system figures out how to function." 

The interviewer then commendably says, "There is a backlash to this idea that everything in the mind is reducible to brain science," and asks Eagleman about that.  Eagleman states very incorrectly "that critique has no basis at all." To the contrary, it has a mountainously large basis, consisting of things like the huge amount of evidence discussed in the posts on this site, very much of which consists of papers authored by neuroscientists themselves.  Speaking briefly like a true-believer dogmatist, Eagleman says, "there's no doubt about this idea that you are your brain," but offers no real support for this claim other than making in the next sentence the strange claim that "Every single thing that happens in your life—your history, who you become, what you’ve seen—is stored in your brain."  

That is a claim that in the human brain there is a record of every single thing a human has experienced, a claim that very few neuroscientists have made.  If such a thing were true, it would not at all prove that "you are your brain," since your identity and self-hood and personality are a different thing than your memory.  Since neuroscientists have no credible theory of either memory encoding or long-term memory storage, given a brain that replaces its proteins at a rate of about 3% per day, the more that humans remember and the longer that humans can remember, the less credible is the theory that memories are stored in brains.  So Eagleman is not helping his case at all by making the strange claim that the brain stores every experience a person has ever had. If people did retain memories of everything they had ever experienced, it would be all the harder to explain how that could possibly occur in a brain subject to such rapid turnover and replacement of its proteins. 

Eagleman offers one other little item trying to support his "you are your brain" claim, but it's paltry. He points out that a neurotransmitter called dopamine can affect gambling behavior.  But, of course, that does nothing to show that you are your brain. When I had a very bad toothache long ago, it sure affected my behavior, but that didn't show that I am my teeth. And if you sprained your ankle, it would briefly affect your behavior, but it wouldn't show that you are your foot. 

Asked about whether "one day we’ll be able to map all the neural connections in someone’s brain and know what kind of person that is," Eagleman says this will never happen in our lifetimes, but "maybe in 300 years, you could read out somebody’s brain."   But if a person believes that the brain stores memories and beliefs, he should be confident that such a thing will soon happen. If brains stored memories and beliefs, we actually should have been able  to read such memories and beliefs decades ago, about the time people were first reading DNA from cells. Maybe somewhere in the back of Eagleman's mind, he knows that neuroscientists are making zero progress in reading memories and beliefs from brains, and that is what caused his pessimistic estimate. 

Towards the end of the interview, Eagleman begins to contradict what he said earlier with such self-assurance. He states, "It appears that consciousness arises from the brain, but there is still a possibility of something else."  When the interviewer commendably follows up on this by saying, "perhaps not everything is generated by the brain" and "we might be tuning in to consciousness somewhere else," Eagleman answers by saying, "I’m not suggesting this is the case, but I am saying this is still a possibility in neuroscience that we have to consider."

So Eagleman ends up contradicting his previous claim that "there's no doubt about this idea that you are your brain."  After speaking like some supremely convinced dogmatist, he now seems to have lost his certitude, and seems to doubt his previous metaphysical claim that he said there was no doubt about.  He ends by saying this regarding a theory of consciousness:  "Not only do we not have a good theory, we don’t even know what a good theory would look like." But such a thought clashes with his claim that "there's no doubt about this idea that you are your brain."

Wednesday, October 7, 2020

Engrams Are Touted Like Phlogiston Was Once Touted

 Scientists were once very convinced that they had figured out how burning works.  They were convinced that things burn because inside them is a combustible element or material called phlogiston, and that during burning this combustible element is released. We now know that this once-cherished theory is entirely wrong.  Like the earlier scientists believing in an incorrect theory of phlogiston, many a neuroscientist believes in the dubious idea that there are engram cells that store memories.  There is no robust evidence for any such thing.  In the post here I discuss some of the very many reasons for rejecting such a theory of neural memory storage. In the post here I discuss some of the flaws in studies that claim to provide evidence for engrams. 

A recent MIT press release claims to have some new evidence for engrams, giving us the not-actually-correct headline "Neuroscientists discover a molecular mechanism that allows memories to form."  You might be impressed by hearing such an announcement from MIT, if you had not read my previous post entitled "Memory Experimenters Have Giant Claims but Low Statistical Power." In that post I examined many cases in which MIT had made impressive-sounding claims about memory research, which were based on studies that tended to be unconvincing because of their too-small study group sizes and low statistical power. It's the same old story in the latest study MIT is touting.  

Here are some phrases I quote from the paper, phrases indicating study group sizes or the number of animals showing some claimed effect:

"n = 3 mice"

"n = 30 mice"

"n = 15 mice"

"n = 3 biologically independent samples" 

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

"n = 4 mice"

Alas, we once again have from MIT a memory study that has failed to provide robust evidence. A general rule of thumb is that to get modestly persuasive results, you need to use at least 15 animals per study group.  In the latest MIT study, apparently either much smaller sizes were used for some study groups, or the claimed effects occurred in only a small fraction of the animals, such as 4 out of 15 or 4 out of 30.  In either case, the results are not compelling. My criticisms of such papers for using too-small study group sizes are partially based on the guideline in the paper "Effect size and statistical power in the rodent fear conditioning literature – A systematic review," which mentions an "estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group)," and says that only 12% of neuroscience experiments involving rodents and fear met such a standard. 

To help understand why results involving only four mice are not convincing, let us imagine a large group of 1000 astrologers scanning birth and death data, eagerly looking for spooky correlations.  They might look for things such as this:

  • A match between a father's month of death and his son's month of birth
  • A match between a father's month of death and his son's month of death
  • A match between a father's month of birth and his son's month of birth
  • A match between a father's month of birth and his son's month of death
  • A match between a mother's month of death and her son's month of birth
  • A match between a mother's month of death and her son's month of death
  • A match between a mother's month of birth and her son's month of birth
  • A match between a mother's month of birth and her son's month of death

Now, if one of the astrologers were to show such a match (or a similar correlation), with only a sample size of four, this would be very unconvincing evidence. For it is not very unlikely that four such matches might occur by chance, particularly if there were many astrologers searching for such a match. If the ratio of matches was 4 out of 15 or 4 out of 30, that also would not be convincing, and not very unlikely to occur by chance. But if the sample size was much larger, showing something like 15 out of 15 such matches, that would be compelling evidence for a real effect, being something very unlikely to occur by chance.  Similarly, experimental results in neuroscience papers should not persuade us when only four animals were used, or when 4 out of 15 or 4 out of 30 animals had some claimed effect. There is too big a chance that such results may be mere false alarms, the kind of matches or correlations that might be showing up merely by chance. When thousands of experimental neuroscientists are busily doing experiments and busily scanning data eagerly looking for correlations that can be interpreted as engram evidence, we would expect that very many false alarms would be popping up, particularly when too-small sample sizes were used such as only  four animals, or when low-percentage effects were claimed, such as 4 out of 15 or 4 out of 30. 
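To put rough numbers on this, here is a minimal sketch of the chance arithmetic, under the simplifying assumption that birth and death months are uniformly distributed, so that any single pairing matches by chance with probability 1/12. The figure of 8,000 searches (1,000 astrologers times 8 kinds of match) is just the illustrative scenario described above.

```python
from math import comb

P_MATCH = 1 / 12    # chance that one father/son month pairing matches by accident

def prob_at_least(k, n, p=P_MATCH):
    """Probability of at least k chance matches among n independent pairs."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

SEARCHES = 1000 * 8    # 1000 astrologers, each checking 8 kinds of match

for k, n in [(4, 4), (4, 15), (4, 30), (15, 15)]:
    p = prob_at_least(k, n)
    print(f"{k} matches out of {n}: chance per search = {p:.2e}, "
          f"expected false alarms across {SEARCHES} searches = {SEARCHES * p:.2f}")
```

Under these assumptions, a rate of 4 matches out of 15 turns up by chance in roughly 3% of searches, and 4 out of 30 in roughly a quarter of them, so hundreds or thousands of such "findings" would be expected across 8,000 searches; even a perfect 4-of-4 match has a realistic chance of turning up somewhere. A 15-of-15 match, by contrast, is for all practical purposes impossible by chance.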

Once again, in the Marco paper we have a neuroscience study using mouse zapping.  Typically a study claiming engram evidence will shock a mouse, and then later send some burst of energy or light to some cells where the scientists think the memory is stored.  A claim will be made that this caused the mouse to freeze (in other words, not move) because the burst of energy or light has activated the fearful memory.  Such a methodology is laughable.  For one thing, it is hard to accurately measure the degree of freezing (non-movement) in a mouse, and judgments of a degree of freezing tend to be subjective. A measurement of heart rate (looking for a sudden spike) is a fairly reliable way to measure whether a fearful memory is being recalled, but such a technique is not used in such neuroscience studies. Also, if freezing behavior (non-movement) occurs, we have no way of knowing whether this is caused by a recall of a fearful memory, or whether it is an effect produced by the very burst of energy or light sent into the mouse's brain. It is known that there are many areas of a mouse's brain that, if zapped, will cause the mouse to show freezing behavior.   (The Marco paper uses the same unreliable technique of judging fear by trying to measure freezing behavior of mice, rather than the reliable technique of measuring heart rate spikes.)  One of quite a few reasons why trying to measure freezing behavior in mice is not a reliable way of determining fear is that fear typically produces in animals the opposite of freezing behavior: a fleeing behavior.  Over my long life I have very many times seen a mouse around my living quarters, but never once saw a mouse freeze when I walked near it (the mice always fled instead). 

In the MIT press release, we are told the scientists shocked some genetically modified mice, and that the mice then began to produce some protein marker. We have no way of knowing whether the production of such a protein marker had anything to do with an alleged formation of a memory in the brain. Organisms such as mice are forming new memories all the time, and also producing new proteins all the time. The formation of the protein could have been merely the result of the electrical shocking, not the formation of a new memory.  Or the protein could have formed simply because proteins are constantly forming in the brain, which replaces its proteins at a rate of about 3% per day (as discussed below). Electrically shocking an organism probably produces many a brain effect that has nothing to do with memory formation.  We can compare the brain during electrical shocking to a pinball machine that lights up in many places at certain times. 

The MIT press release gives a quote by the post-doc researcher Marco that gives us a hint that he may be a bit on the wrong track. We read this:

“ 'The formation and preservation of memory is a very delicate and coordinated event that spreads over hours and days, and might be even months — we don’t know for sure,' Marco says. 'During this process, there are a few waves of gene expression and protein synthesis that make the connections between the neurons stronger and faster.' ”
 
It is utterly false that the formation of a memory requires "hours and days, and might be even months." To the contrary, we know that  a human being can form permanent new memories instantly.  If someone sexually assaults you or puts a gun in your mouth, you will instantly form a permanent memory of that event that will probably last the rest of your life.  But protein synthesis requires many minutes. The fact that humans can form permanent new memories instantly is one of the strongest reasons for rejecting all claims that memories are formed when engrams (new cells or new cell proteins) are produced.  The formation of neural engrams would necessarily take a length of time sufficient to prevent the instantaneous formation of permanent new memories. 

The ability of humans to form new memories in only three seconds was shown by a scientific experiment discussed in this post. 

We would take much, much longer to acquire new memories if the theory of engrams (neural memory storage) were correct.  Discussing the rate of translation (something that must occur during the synthesis of a new protein), the source here states, "It was found that the rate is quite constant across proteins and is about 6 amino acids per second."  A wikipedia.org article agrees, citing a speed of 6 to 9 amino acids per second. The average eukaryotic protein has a length of about 472 amino acids, according to this source.  Dividing 472 by 6, we find that merely translating an average-sized protein takes on the order of 80 seconds, and that is before any folding, transport or incorporation into synapses.  We cannot be forming new memories by some "engram creation" requiring the synthesis of new proteins, because we can acquire new memories instantly. 
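As a quick check on that arithmetic, using only the figures quoted above:

```python
AVG_PROTEIN_LENGTH = 472   # amino acids, the quoted average for a eukaryotic protein
TRANSLATION_RATE = 6       # amino acids per second, the low end of the quoted 6-9 range

seconds = AVG_PROTEIN_LENGTH / TRANSLATION_RATE
print(f"translation of one average protein: about {seconds:.0f} seconds")   # about 79 seconds
```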

The 2018 paper here gives us a reason for rejecting all claims that memories are stored in brains. The paper finds that proteins in the human brain are replaced at a rate of about 3% to 4% per day. Unlike very many neuroscientists, who seem very skilled at ignoring the implications of their own findings, the authors actually seem to have a clue about the implications of their research. We read the following:

"Here we show that brain tissue turns over much faster at a rate of 3–4% per day. This would imply complete renewal of brain tissue proteins well within 4–5 weeks. From a physiological viewpoint this is astounding, as it provides us with a much greater framework for the capacity of brain tissue to recondition. Moreover, from a philosophical perspective these observations are even more surprising. If rapid protein turnover of brain tissue implies that all organic material is renewed, then all data internalized in that tissue are also prone to renewal. These findings spark (even) more debate on the interpretation and (long-term) storage of data in neural matter, the capacity of humans to consciously or unconsciously process data, and the (organic) basis of our own personality and ego." 

The authors rightly seem to be hesitating about whether there actually is an organic basis for our personality and ego. Given a protein replacement rate of 3% per day in the brain, we would not be able to remember things for more than about 35 days if our memories were created as brain engrams.  
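As a rough check on that timescale, here is a minimal sketch of the arithmetic under two simple readings of "3% per day": a straight-line reading, and a compounding reading in which 3% of whatever original protein remains is replaced each day. On either reading the original molecules are essentially gone within weeks to a few months, nothing like the 50- or 70-year spans over which people retain memories.

```python
import math

DAILY_RATE = 0.03   # fraction of brain protein replaced per day (from the cited 2018 paper)

# Straight-line reading: complete renewal after 1 / 0.03 days.
print(f"linear estimate of complete renewal: {1 / DAILY_RATE:.0f} days")   # about 33 days

# Compounding reading: each day, 3% of the *remaining* original protein is replaced.
half_gone = math.log(0.5) / math.log(1 - DAILY_RATE)
nearly_all_gone = math.log(0.01) / math.log(1 - DAILY_RATE)
print(f"half of the original protein gone after about {half_gone:.0f} days")        # about 23 days
print(f"99% of the original protein gone after about {nearly_all_gone:.0f} days")   # about 151 days
```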

Postscript: This month the Science Daily site (which so often has hyped headlines not matching any robust research) has been showing a headline of "New Player in Long Term Memory."  The article is about a paper that suffers from the same problems as the paper discussed above.  The paper provides no real evidence for any physical effect in the brain causing memory consolidation.  Examining the paper, I find the same old problems that are found again and again and again in papers of this type, such as the following:

(1) Too-small study group sizes, with several being less than 8 animals per study group (15 is the minimum for a moderately reliable result).
(2) A study involving only mice, not humans.
(3) A use of an unreliable method for judging fear in animals (trying to measure the amount of time a mouse is "frozen" in fear), rather than use of a reliable fear-detection method such as measuring heart rate spikes. 
(4) Citations to other papers that suffered from the same type of problems.

Looking further at the Marco paper (which is behind a paywall, but kindly provided to me by a scientist), I see other methodological problems with it. For one thing, mouse brains were studied hours  after some foot-shocking of mice,  which means there wasn't any real-time matching between a memory creation event and something happening in a brain.  The paper also informs us that "blinding was not applied in the behavioral studies (CFC) and imaging acquisition because animals and samples need to be controlled by treatment or conditions."  Blinding is a very important procedural precaution to prevent biased data acquisition and biased analysis, and we should be suspicious of experimental studies that fail to thoroughly implement blinding protocols.  The paper also makes no claim to be a pre-registered study. When a study does not pre-register a hypothesis to be tested, the scientists running the study are free to go on a "fishing expedition" looking in countless places for some type of association or correlation; and in such cases there is a large chance of false alarms occurring. 

Monday, September 28, 2020

Raven Smarts Defy Prevailing Brain Dogmas

Professors who lack any understanding of how a brain could produce intelligence like to use localization claims to try to impress us. In a localization claim, a professor asserts that some particular mental function comes from some particular part of the brain.  After hearing such claims, someone might say, "These guys may not know the how of cognition, but at least they know the where."  But such localization claims do not hold up well to scrutiny. 

One of the main localization claims that has long been made by neuroscientists is a claim that thought or decision making come from the front top part of the brain, the prefrontal cortex. In my post here I cite many neuroscience papers giving evidence that conflicts with such a claim.  For example, the scientific paper here tells us that patients with prefrontal damage "often have a remarkable absence of intellectual impairment, as measured by conventional IQ tests." The paper here tested IQ for 156 Vietnam veterans who had undergone frontal lobe brain injury during combat. If you do the math using Figure 5 in this paper, you get an average IQ of 98, only two points lower than average. You could plausibly explain that 2 point difference purely by assuming that those who got injured had a very slightly lower average intelligence (a plausible assumption given that smarter people would be more likely to have smart behavior reducing their chance of injury). Similarly, this study checked the IQ of 7 patients with prefrontal cortex damage, and found that they had an average IQ of 101.

Claims that thought comes from the prefrontal cortex have always been inconsistent with the observational reality that certain birds behave with a rather keen intelligence, despite lacking any cerebral cortex. An article on Aeon mentions that there is little correlation between brain size and intelligence, or between intelligence and the size of the frontal cortex. The article states the following:

"Some of the most perspicacious animals are the corvids – crows, ravens, and rooks – which have brains less than 1 per cent the size of a human brain, but still perform feats of cognition comparable to chimpanzees and gorillas. Behavioural studies have shown that these birds can make and use tools, and recognise people on the street, feats that even many primates are not known to achieve."



An article on the Science Daily site states the following:

"Some birds are capable of astonishing cognitive performances to rival those of higher developed mammals such as primates. For example, ravens recognise themselves in the mirror and plan for the future. They are also able to put themselves in the position of others, recognise causalities and draw conclusions. Pigeons can learn English spelling up to the level of six-year-old children."

There are two separate reasons why the cognitive abilities of ravens, crows and rooks argue against prevailing brain dogmas:  

(1) According to prevailing brain dogmas, animals such as ravens with so tiny a brain should not be anywhere near as smart as they are.
(2) According to prevailing brain dogmas, animals such as ravens with no brain cortex should not be anywhere near as smart as they are. 

In a recent "perspective" article in the journal Science,  a scientist makes a very strange attempt to get us to believe that crows have a cortex. The opinion piece is entitled, "Birds do have a brain cortex -- and think."  The author states that "birds, and particularly corvids (such as ravens), are as cognitively capable as monkeys and even great apes."  Using a tricky choice of words that might fool the average reader into thinking that some birds have more neurons than creatures such as humans, the author states, "Because their neurons are smaller, the pallium of songbirds and parrots actually comprises many more information-processing neuronal units than the equivalent-sized mammalian cortices."  Do not be fooled by this language.  The wikipedia.org page here lists the number of neurons in rooks, ravens and parrots as about  1 or 2 billion, and the number of neurons in a human as 86 billion. So humans have more than forty times more neurons than animals such as ravens and parrots. 

The author's attempt to argue that birds have a cortex is not persuasive.  Referring to a part of the bird brain called the pallium, she states, "Birds do have a cerebral cortex, in the sense that both their pallium and the mammalian counterpart are enormous neuronal populations derived from the same dorsal half of the second neuromere in neural tube development."  But that's rather like saying that your ten-year-old owns an automobile, in the sense that his bicycle is a wheeled transportation vehicle that can move fast, like an automobile.  The cortex is defined as a distinctive layer of cells on the outside edge of a brain.  Birds do not have such a distinctive layer of cells on the outside edge of their brains. So the very many scientists who have stated that birds do not have a cerebral cortex have spoken correctly. 

The author attempts to persuade us that the pallium of a bird's brain is kind of like a cortex, by making this dubious claim: "Nieder et al. show that the bird pallium has neurons that represent what it perceives—a hallmark of consciousness."  While we have good reason to think that the smarter birds such as ravens are conscious, there is no good evidence that any neurons of any organism represent something that the organism perceived.  When we look at the reference to the paper by Nieder and his colleagues, we find that it tested only two animals. 15 animals per study group is the minimum for a moderately reliable neuroscience experimental research paper. 

Another article in the journal Science is just as silly as the one I just discussed.  The article is entitled "Newfound brain structure explains why some birds are so smart—and maybe even self-aware." The article contradicts the other Science article by referring to a lack of a neocortex in birds.  The article refers to a paper by Onur Güntürkün and others that obscurely refers to "hitherto unknown neuroarchitecture of the avian sensory forebrain that is composed of iteratively organized canonical circuits within tangentially organized lamina-like and orthogonally positioned column-like entities."

Another article quotes the same Onur Güntürkün speaking rather more clearly:

" 'Here, too, the structure was shown to consist of columns, in which signals are transmitted from top to bottom and vice versa, and long horizontal fibres,'  explains Onur Güntürkün. However, this structure is only found in the sensory areas of the avian brain. Other areas, such as associative areas, are organised in a different way."

Of course, the mere existence of such column-like structures does nothing at all to explain the smarts of birds like ravens, particularly since such structures are found only in sensory areas.  There is no possible physical arrangement of neurons that would do anything at all to explain anything like intelligence in any organism. So the  Science article headline claiming that  "newfound brain structure explains why some birds are so smart" is baloney. 

Postscript: A new scientific paper states that despite having tiny brains, mouse lemurs perform pretty much as well as primates with brains hundreds of times larger:

"Using a comprehensive standardized test series of cognitive experiments, the so-called 'Primate Cognition Test Battery' (PCTB), small children, great apes as well as baboons and macaques have already been tested for their cognitive abilities in the physical and social domain...For the first time, researchers of the 'Behavioral Ecology and Sociobiology Unit' of the DPZ have now tested three lemur species with the PCTB...The results of the new study show that despite their smaller brains lemurs' average cognitive performance in the tests of the PCTB was not fundamentally different from the performances of the other primate species. This is even true for mouse lemurs, which have brains about 200 times smaller than those of chimpanzees and orangutans. Only in tests examining spatial reasoning primate species with larger brains performed better. However, no systematic differences in species performances were ...found for the understanding of causal and numerical relationships nor in tests of the social domain."