Wednesday, January 12, 2022

No, a USC Team Did Not Show "How Memories Are Stored in the Brain"

The EurekAlert site is yet another "science news" site that seems merely to pass on press releases coming from university press offices.  Nowadays university press offices are not a very reliable source of information, as they tend to display all kinds of "local bias" in which the work of researchers at the university gets adulatory treatment it does not deserve. University press offices often make grandiose, fawning or hype-filled claims about research done by professors at their university, claims that are often unwarranted.  The press releases from such offices often make unimportant or dubious research sound as if it were some type of important breakthrough. 

The EurekAlert site says that it is "a service of the American Association for the Advancement of Science." That makes it sound like we would be getting some kind of "official science news" or at least news of better-than-average reliability. But very strangely at the bottom of each news story on the site, we read this notice: "Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system."  That basically means that we should not trust any headlines we read merely because they appear on the EurekAlert site.  At the post here I discuss various untrue headlines that appeared on the EurekAlert site.

The latest untrue headline to appear at the EurekAlert site is a headline from two days ago, one which stated "USC team shows how memories are stored in the brain, with potential impact on conditions like PTSD." Nothing of the sort occurred.  All that happened was that some scientists tracked some new synapses being created and an equal number of synapses being lost after some tiny zebrafish learned something. 

We read text in the story that contradicts the story's headline:

"They made the groundbreaking discovery that learning causes synapses, the connections between neurons, to proliferate in some areas and disappear in others rather than merely changing their strength, as commonly thought. These changes in synapses may help explain how memories are formed and why certain kinds of memories are stronger than others."

Notice the contradiction. The headline claimed that the team had shown how memories are stored in the brain. But the text of the story merely makes the much weaker claim that the type of thing observed "may help explain how memories are formed." 

The quotation above is not even an accurate description of what was observed.  The scientists did not find that synapses "proliferate in some areas and disappear in others."  Instead, what was observed in each area of the zebrafish brain studied was a roughly equal number of synapse gains and synapse losses. Below is one of the visuals from the paper (from the page here and the site here). It shows synapse losses and gains in only one tiny part of the zebrafish's brain during a small time period. Notice that the blue dots (representing synapse losses) are roughly as common as the yellow dots (representing synapse gains). 

Data results such as this are best interpreted under the hypothesis that we are merely seeing random losses and gains of synapses that continually occur, and that the result has nothing to do with anything being learned.  It has long been known that synapses are short-lived things.  The paper here states, "Experiments indicate in absence of activity average life times ranging from minutes for immature synapses to two months for mature ones with large weights."  Synapses randomly appear and disappear, just as pimples randomly appear and disappear on the face of a teenager with a bad case of acne. 

Zebrafish have only about 100,000 neurons, and there are perhaps 1000 synapses for every neuron. That makes very roughly about 100 million synapses in the zebrafish brain. Given synapses that have average lifetimes of no greater than a few months, we would expect that every hour about 100,000 synapses in the zebrafish brain would randomly be lost or would randomly appear.  The synapse loss and gain shown in the USC data is about what we would expect under such numbers. The visual shows hundreds of synapse losses and gains, but this visual only maps such losses and gains in a tiny portion of the zebrafish brain. 
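That back-of-envelope arithmetic is easy to check. The sketch below uses the rough figures cited above (100,000 neurons, about 1,000 synapses per neuron, and an assumed average synapse lifetime of about two months); every number is a ballpark estimate, not a measured value:

```python
# Rough synapse-turnover estimate for the zebrafish brain.
# All figures are ballpark estimates from the text, not measurements.
neurons = 100_000             # approximate zebrafish neuron count
synapses_per_neuron = 1_000   # rough estimate
total_synapses = neurons * synapses_per_neuron   # ~100 million

avg_lifetime_days = 60        # assumed average synapse lifetime (~2 months)
lifetime_hours = avg_lifetime_days * 24

# In a steady state, hourly losses are roughly total / lifetime,
# with gains matching losses to keep the total about constant
turnover_per_hour = total_synapses / lifetime_hours
print(f"{total_synapses:,} synapses, ~{turnover_per_hour:,.0f} lost (and gained) per hour")
```

With a two-month lifetime the estimate comes out to tens of thousands of synapses lost (and gained) per hour, on the same order as the figure above; shorter assumed lifetimes push it higher.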

The type of learning tested on the zebrafish was something called tail-flick conditioning or TFC. At the link here we are told this:

"The total numbers of synapses before TFC are not significantly different among the different groups: superlative learner (L, N=11 fish), partial learner (PL, N = 6), nonlearner (NL, N=11), US only (N=11), NS (N=11), and CS only (N=11) (p > 0.3, Kruskal Wallis).
B The total numbers of synapses after TFC are not significantly different among the different groups (p > 0.3, Kruskal Wallis)."

So there was no increase in synapses for the zebrafish who learned something (the L and PL groups) compared to the zebrafish who did not learn anything (the NL group).  The study has not produced any evidence that learning or memory formation produces an increase in synapses.  

The study also failed to support the widely-made claim that synapses strengthen during memory formation or learning. In the EurekAlert story we read this:

" 'For the last 40 years the common wisdom was that you learn by changing the strength of the synapses,' said Kesselman, who also serves as director of the Informatics Division at the USC Information Sciences Institute and is a professor with the Daniel J. Epstein Department of Industrial and Systems Engineering, 'but that’s not what we found in this case.' ” 

Oops, it sounds like neuroscientists have been telling us baloney for the past 40 years by trying to claim that memories are formed by synapse strengthening (an idea that never made any sense, because information is never stored by a mere strengthening of something). The USC scientists have not presented anything that can serve as a credible substitute narrative.  "Synapses being lost at the same rate as synapses being gained" makes no sense as a narrative of how memories could be stored, just as "words being written at the same rate as words being erased" makes no sense as a description of how someone could write a book using pencil and paper. 

By visually diagramming the high turnover rate of synapses, and by reminding us of the short lifetimes and rapid turnover of synapses, what the USC study really has done is to highlight a major reason for rejecting all claims that human memories are stored in synapses.  Synapses only last for days, weeks or months, not years; and the proteins that make up synapses have average lifetimes of only a few weeks or less. But human memories often last for 50 years or more.  It makes no sense to believe that human memories that can last for 50 years are stored in synapses which last a few months at best, and which internally are subject to constant remodeling and restructuring because of the short lifetimes of synapse proteins. 

In an article he wrote at the site The Conversation, USC scientist Dan Arnold describes his own results in a give-you-the-wrong-idea way, stating the following: "When we compared the 3D synapse maps before and after memory formation, we found that neurons in one brain region, the anterolateral dorsal pallium, developed new synapses while neurons predominantly in a second region, the anteromedial dorsal pallium, lost synapses."  The results (as shown here) were actually that both regions lost and gained synapses at roughly equal rates. Arnold confesses, "It’s still unknown whether synapse generation and loss actually drive memory formation." 

So why is it that the press release for this work contained the untrue headline "USC team shows how memories are stored in the brain, with potential impact on conditions like PTSD"? Why do scientists so very often allow untrue press releases about their work to be issued by the press offices of their universities, press releases making claims that are not supported by their work, and that often contradict the statements of those same scientists? Is it because scientists are willing to condone lying hype about their work, for the sake of getting more of the paper citations that scientists so much long for (a scientist's citation count being a number as important to him as a baseball player's batting average)? 

A particularly pathetic aspect of the phony press release headline is its claim that this counting of synapse losses and gains in tiny zebrafish has a "potential impact on conditions like PTSD."  Such research has no relevance to humans with post-traumatic stress disorder, and the claim that it does is as phony as the claim that the study "shows how memories are stored." 

Wednesday, January 5, 2022

Suspect Shenanigans When You Hear Claims of "Mind Reading" Technology

The New Yorker recently published an extremely misleading article with the title "The Science of Mind Reading" and the subtitle "Researchers are pursuing age-old questions about the nature of thoughts—and learning how to read them." The article (not written by a neuroscience scholar) provides no actual evidence that anyone is making progress in reading thoughts from a brain. 

The article starts out with a dramatic-sounding but extremely dubious narrative. We hear of experts trying to achieve communication with a "Patient 23" who was assumed to be in a "vegetative state" after a bad injury five years earlier.  We read about the experts asking questions while scanning the patient's brain, looking for some brain signals that could be interpreted as a "yes" answer or a "no" answer.  We are told: "They would pose a question and tell him that he could signal 'yes' by imagining playing tennis, or 'no' by thinking about walking around his house." 

We get this narrative (I will put unwarranted and probably untrue statements in boldface):

"Then he asked the first question: 'Is your father’s name Alexander''

The man’s premotor cortex lit up. He was thinking about tennis—yes.

'Is your father’s name Thomas?'

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

'Do you have any brothers?'


'Do you have any sisters?'


Constantly foisted upon us by scientists and science writers, the claim that particular regions of the brain "light up" under brain scanning is untrue. Such claims are visually reinforced by extremely deceptive visuals in which tiny differences of less than 1 percent are shown in bright red, causing people to think that very slight differences are major differences. The truth is that all brain regions are active all the time. When a brain is scanned, only tiny signal differences show up.  Typically the differences will be no greater than about half of one percent, smaller than 1 part in 200.  When scanning a brain, you can always see dozens of little areas with very slightly greater activity, and there is no reason to think that such variations are anything more than very slight chance variations. Similarly, if you were to analyze the blood flow in someone's foot, you would find random small variations in blood flow between different regions, with differences of about 1 part in 200. 

Because of such random variations, there would never be any warrant for claiming that a person was thinking about a particular thing based on small fluctuations in brain activity. At any moment there might for random reasons be 100 different little areas in the brain that had 1 part in 200 greater activity, and 100 other different little areas in the brain that might have 1 part in 200 less activity.  In this case no evidence has been provided of any ability to read thoughts of a person supposed to be in a vegetative state. We cannot reliably distinguish any signal from the noise. 
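The point can be illustrated with a toy simulation (the numbers below are purely illustrative, not taken from any actual scan). Give 1,000 small regions identical true activity, add measurement noise of about a quarter of a percent, and dozens of regions will still appear to "light up" by more than 1 part in 200 between two scans:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 1,000 regions with identical true activity (baseline 100),
# each measured twice with noise of ~0.25% of the signal.
baseline = np.full(1000, 100.0)
scan1 = baseline + rng.normal(scale=0.25, size=1000)
scan2 = baseline + rng.normal(scale=0.25, size=1000)

pct_change = (scan2 - scan1) / scan1 * 100.0
hot_spots = int(np.sum(pct_change > 0.5))   # regions "up" by more than 1 part in 200
print(hot_spots, "regions exceed a 0.5% increase purely by chance")
```

Even though every region has exactly the same true activity, dozens of "hot spots" appear from noise alone, which is why tiny percentage differences between scans prove nothing.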

The New Yorker article describing the case above refers us to a Los Angeles Times article entitled "Brains of Vegetative Patients Show Signs of Life." The article gives us no good evidence that thoughts were read from this patient 23. The article merely mentions that 54 patients in a vegetative state had their brains scanned, and that one of them (patient 23) seemed "several times" to answer "yes" or "no" correctly, based on examining fluctuations of brain activity.  Given random variations in brain activity, you would expect to get such a result by chance if you scanned 54 patients who were completely unconscious. So no evidence of either consciousness or thought reading has been provided.  

A look at the corresponding scientific paper shows that the fluctuations in brain activity were no more than about half of one percent. No paper like this should be taken seriously unless the authors followed a rigorous blinding protocol, but the paper makes no mention of any blinding protocol being followed.  Under a blinding protocol, anyone looking for signs of a "yes" or "no" answer would not know which answer was the correct one.  The paper provides no actual evidence either of thought reading by brain scanning or even of detection of consciousness. We merely have tiny 1-part-in-200 signal variations of a type we would expect to get by chance from scanning one or more of 54 patients who are all unconscious.  

The paper tells us that six questions were asked, and the authors seemed impressed that one of the 54 patients seemed to them to answer all six questions correctly (by means of brain fluctuations that the authors subjectively interpreted).  The probability of getting six correct answers to yes-or-no questions by a chance method such as coin-flipping is 1 in 2 to the sixth power, or 1 in 64.  So it is not very unlikely at all that you would get one such result testing 54 patients purely by chance, even if all of the patients were unconscious and none of them understood the instructions they were given.  
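The arithmetic is easy to verify:

```python
# Chance of answering 6 yes/no questions correctly by pure guessing
p_one_patient = 0.5 ** 6                     # 1/64, about 1.6%

# Chance that at least one of 54 unconscious patients would do so by luck
p_at_least_one = 1 - (1 - p_one_patient) ** 54
print(f"per patient: 1 in {1 / p_one_patient:.0f}; "
      f"at least one of 54: {p_at_least_one:.0%}")
```

Under pure chance, there is a better-than-even (roughly 57 percent) probability that at least one of 54 unconscious patients would "answer" all six questions correctly.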

The New Yorker article then introduces Princeton scientist Ken Norman, incorrectly describing him as "an expert on thought decoding." Because no progress has been made on decoding thoughts from studying brains, no one should be described as an expert on such a thing. The article then gives us a very misleading passage trying to suggest that scientists are making some progress in understanding how a brain could produce or represent thoughts:

"Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense 'meaning space.' They could see how these points were interrelated and encoded by neurons." 

To the contrary, no neuroscientist has the slightest idea of how thoughts could be encoded by neurons, nor have neuroscientists discovered any evidence that any neurons encode thoughts. It is nonsensical to claim that thoughts can be compared to points in a coordinate space. A point in such a space is merely a list of numeric coordinates, but thoughts can be vastly more complicated. If I have the thought that I would love to be lounging on a beach during sunset while sipping lemonade, there is no way to express that thought as a set of spatial coordinates. 

We then read about some experiment:

"Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest. 'We want to get the brain patterns that are associated with different subclasses of scenes,' Norman said." 

But then the article goes into a long historical digression, and we never learn the result of this experiment. Norman is often mentioned, but we hear no mention of any convincing work he has done on this topic. Inaccurately described as "thought decoding," the attempt described above is merely an attempt to pick up signs of visual perception in the brain. Seeing something is not thinking about it. Most of the alleged examples of high-tech "mind reading" are merely claimed examples of picking up traces of vision by looking at brains -- examples that are not properly called "mind reading" (a term that implies reading someone's thoughts).

We hear a long discussion often mentioning Ken Norman, but one failing to present any good evidence of high-tech mind reading. We read this claim about brain imaging: "The scripts and the scenes were real—it was possible to detect them with a machine." But the writer presents no evidence to back up such a claim. 

Norman is a champion of a very dubious analytical technique called multi-voxel pattern analysis (MVPA), and seems to think such a technique may help read thoughts from the brain. A paper points out problems with such a technique:

"MVPA does not provide a reliable guide to what information is being used by the brain during cognitive tasks, nor where that information is. This is due in part to inherent run to run variability in the decision space generated by the classifier, but there are also several other issues, discussed here, that make inference from the characteristics of the learned models to relevant brain activity deeply problematic." 

In a paper, Norman claims "This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading."  Looking up two of the papers cited in support of this claim, I see that only four subjects were used in each study.  Looking up another of the studies cited, I find that only five subjects were used for the experiment.  This means none of these studies provided robust evidence (15 subjects per study group being the minimum for a moderately reliable result). This kind of thing goes on massively in neuroscience papers: authors claiming that other papers showed something the cited papers did not actually show, because of poor methodology (usually including way-too-small sample sizes) in those cited studies.   

The New Yorker article then discusses a neuroscientist named Jack Gallant, stating the following: "Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors."  Gallant has produced a clip entitled "Movie Reconstruction from Human Brain Activity."

On the left side of the video we see some visual images. On the right side of the video we see some blurry images entitled "Clip reconstructed from brain activity."  We are left with the impression that scientists have somehow been able to get "movies in the mind" by scanning brains. 

However, such an impression is very misleading, and what is going on smells like smoke and mirrors shenanigans.  The text below the video explains the funky technique used.  The videos entitled "clip reconstructed from brain activity" were produced through some extremely elaborate algorithm that mainly used inputs other than brain activity. Here is the description of the technique used:

"[1] Record brain activity while the subject watches several hours of movie trailers. [2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured....[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions. [4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction."

This bizarre and complicated rigmarole is a very elaborate scheme in which brain activity is only one of the inputs, the main input being lots of footage from YouTube videos.  It is very misleading to label the videos "clip reconstructed from brain activity," as the clips are mainly constructed from data other than brain activity. No actual evidence has been produced that someone detected anything like "movies in the brain." It seems to be merely smoke and mirrors, in which some output produced from a variety of sources (by a ridiculously complicated process) is being passed off as something like "movies in the brain." 
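To make the point concrete, here is a minimal sketch of the selection-and-averaging step described in the quoted procedure. All the array shapes, sizes, and data below are invented stand-ins; nothing here comes from the actual study. Note that the measured brain activity enters only as a similarity score used for ranking, while every pixel of the "reconstruction" comes from the video library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the study's data (shapes and values are illustrative)
n_library = 5_000          # stands in for ~18,000,000 random library clips
n_voxels = 200             # stands in for the measured brain locations
frame_shape = (32, 32)     # stands in for a clip's frame

predicted = rng.normal(size=(n_library, n_voxels))  # dictionary-predicted activity per clip
observed = rng.normal(size=n_voxels)                # measured activity for the test clip
library_frames = rng.random(size=(n_library, *frame_shape))

# Score each library clip by cosine similarity between its predicted
# activity and the observed activity
scores = (predicted @ observed) / (
    np.linalg.norm(predicted, axis=1) * np.linalg.norm(observed)
)

# Keep the 100 best-matching clips and average their frames: the output
# pixels are drawn entirely from the library, not from the brain data
top100 = np.argsort(scores)[-100:]
reconstruction = library_frames[top100].mean(axis=0)
print(reconstruction.shape)
```

As the sketch makes plain, the brain scan only picks which library clips get averaged; the imagery itself is sourced wholly from the library.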

Similar types of extremely dubious convoluted methods seem to be going on in the papers here co-authored by Gallant.

In both of these papers, we have a kind of byzantine methodology in which bizarre visual montages or artificial video clips are constructed. For example, the second paper resorts to "an averaged high posterior (AHP) reconstruction by averaging the 100 clips in the sampled natural movie prior that had the highest posterior probability." The claim made by the New Yorker -- that Gallant has "used thought decoding to reconstruct video montages from brain scans" -- is incorrect. Instead, Gallant is constructing visual montages using some extremely elaborate and hard-to-justify methodology (the opposite of straightforward), and brain scans are merely one of many inputs from which such montages are constructed.  This is no evidence of technology reading thoughts or imagery from brains.  In both of the papers above, only three subjects were used, while 15 subjects per study group is the minimum for a moderately compelling experimental result. And since neither paper uses a blinding protocol, the papers fail to provide robust evidence of anything. 

The rest of the New Yorker article is mainly something along the lines of "well, if we've made this much progress, what wonderful things may be on the horizon?" But no robust evidence has been provided that any progress has been made in reading thoughts or mental imagery from brains. The author has spent quite a while interviewing and walking around with scientist Ken Norman, and has accepted "hook, line and sinker" all the claims Norman has made, without asking any tough questions, and without critically analyzing the lack of evidence behind his more doubtful claims and the dubious character of the methodologies involved. The article is written by a freelance writer who has written on a very wide variety of topics, and who shows no signs of being a scholar of neuroscience or the brain or philosophy of mind issues.  

There are no strong neural correlates of either thinking or recall. As discussed here, brain scan studies looking for neural correlates of thinking or recall find only very small differences in brain activity, typically smaller than 1 part in 200. Such differences are what we would expect to see from chance variations, even if a brain does not produce thinking and does not produce recall.  The chart below illustrates the point. 

[chart: neural correlates of thinking]

What typically goes on in some study claiming to find some neural correlate of thinking or recall is professor pareidolia. Pareidolia is when someone hoping to find some pattern reports a pattern that isn't really there, like someone eagerly scanning his toast each day for years until he finally reports finding something that looks to him like the face of Jesus. A professor examining brain scans and eagerly hoping to find some neural signature or correlate of thinking or recall may be as prone to pareidolia as some person scanning the clouds each day eagerly hoping to find some shape that looks like an angel. 

There are ways for scientists to minimize the chance that they are reporting patterns because of pareidolia. One way is the application of a rigorous blinding protocol throughout an experiment. Another is to use adequate sample sizes, such as 15 or 30 subjects per study group. Most neuroscience experiments fail to follow such standards. The shockingly bad tendencies of many experimental biologists were recently revealed by a replication project that found a pitifully low replication rate and other severe problems in a group of biology experiments chosen for replication. 

Friday, December 31, 2021

NSF Grant Tool Query Suggests Engrams Are Not Really Science

An engram is a hypothetical spot in the brain where there is alleged to be a memory trace, an alteration in brain matter caused by the storage of memory. While scientists have claimed that there are countless engrams in your head, the notion of an engram has no robust scientific evidence behind it. No robust evidence for engrams has been found in any organism. Every study that has claimed to provide evidence for the existence of an engram has had problems that should cause us to doubt that good evidence for engrams was provided. 

In a previous post I pointed out the not-really-science status of engrams by doing some queries on major preprint servers that store millions of scientific papers, servers such as the physics paper preprint server (which includes quantitative biology papers), the biology preprint server and the psychology preprint server.  The queries (searching for papers that used the word "engram") showed only the faintest trace of scientific papers mentioning "engrams" in their titles.  Only a handful of papers used that word in their title. An examination of such papers (discussed in my post) showed that they provided nothing like substantial evidence for the existence of any such thing as an engram. 

There is another way of testing whether this concept of engrams has any real observational support. We can use the grant search tool of the National Science Foundation. The National Science Foundation is a US institution that doles out billions of dollars each year in grants for scientific research.  You can use the NSF's grant query tool to find out how much research money is being allocated to research particular topics. 

You can perform the search by using the URL below:

The results are shown below.  We get only 3 matches. The last match is some climate paper having nothing to do with memory.  So our search has produced only two National Science Foundation grants relating to the topic of engrams.

[screenshot: engram grant query results]

The second project ("Functional Dissociation Within the Hippocampal Formation: Learning and Memory") was completed in 1992. Clicking on the link to the project, we see that $163,000 was spent, but no scientific papers are listed as resulting from the project. 

The first grant is a grant of $996,778.00 (nearly one million dollars) that was given to a project entitled "Dendritic spine mechano-biology and the process of memory formation." The project started in 2017 and has a listed end date of July, 2022.  The project description gives us a statement of speculative dogma regarding memory storage.  There are actually very good reasons why the speculations cannot be correct. Below is the statement from the project description:

"The initiation of learning begins with changes at neuronal synapses that can strengthen (or weaken) the response of the synapse. This process is termed synaptic plasticity. Stimuli that produce learning lead to structural changes of the post-synaptic dendritic spine. The initial events of memory and learning include a temporary rise in calcium concentrations and activation of a protein called calmodulin. The next step is activation of calmodulin-dependent enzyme, kinase II (CaMKII). At the same time, structural rearrangements occur in the actin cytoskeleton leading to an enlargement of the spine compartment. How these initial events lead to remodeling of the actin cytoskeleton is largely unknown. This project focuses on the events that lead to the changes in actin cytoskeleton. The research also addresses the question of how these structural changes in the actin cytoskeleton are used to maintain memory."

To see why the main parts of the statement are not well-founded in observations, let us consider dendritic spines. A dendritic spine is a tiny protrusion from one of the dendrites of a neuron. The diagram below shows a neuron in the top half of the diagram. Some dendritic spines are shown in the bottom half of the visual. The bottom half of the visual is a closeup of the red-circled part in the top of the diagram. 

An individual neuron in the brain may have about a thousand such dendritic spines. The total number of dendritic spines in the brain has been estimated at 100 trillion, which is about a thousand times greater than the number of neurons in the brain.  The total number of synapses in the brain has also been estimated at 100 trillion. A large fraction of synapses are connected to dendritic spines. 

Now, given such a high number of dendritic spines and synapses, we have the interesting situation that there is no possibility of correlating the learning of something and a strengthening of synapses or a strengthening or enlarging or growth of dendritic spines. Even if we are testing only a mouse, we still have an animal with trillions of dendritic spines and trillions of synapses. Scientists are absolutely unable to measure the size, strength or growth of all of those dendritic spines and synapses.  The technology for doing that simply does not exist.  What scientists can do is inspect a very small number of dendritic spines, taking snapshots of their physical state.  But no such inspection would ever allow you to conclude that one or more dendritic spines had increased in size or grown or strengthened because some learning had occurred. Since dendritic spines slowly increase and decrease in size in an apparently random fashion, there is no way to tell whether the increase or decrease of a dendritic spine (or a small number of such spines) is being caused by learning or by the formation of a memory. 

Therefore the statements below (quoted above) cannot be well-founded:

"The initiation of learning begins with changes at neuronal synapses that can strengthen (or weaken) the response of the synapse. This process is termed synaptic plasticity. Stimuli that produce learning lead to structural changes of the post-synaptic dendritic spine."

In fact, we know of the strongest reason why the hypothesis underlying such a claim cannot be true. The reason is that human memories are often extremely stable and long lasting, while dendritic spines and synapses are unstable, fluctuating things that have typical lifetimes of a few months or weeks.  Read here to find some papers supporting such a claim.  I can quote some scientists (Emilio Bizzi and Robert Ajemian) on this topic:

"If we believe that memories are made of patterns of synaptic connections sculpted by experience, and if we know, behaviorally, that motor memories last a lifetime, then how can we explain the fact that individual synaptic spines are constantly turning over and that aggregate synaptic strengths are constantly fluctuating? How can the memories outlast their putative constitutive components?"

The word "outlast" is a huge understatement here, for the fact is that human memories such as 50-year-old memories last very many times longer than the maximum lifetime of dendritic spines and synapses, and such memories last 1000 times longer than the protein molecules that make up such spines and synapses (which have average lifetimes of only a few weeks or less). 

But enough of this long disputation of the claims made in the description of the project entitled "Dendritic spine mechano-biology and the process of memory formation."  Now let's look at what the million-dollar project has published so far in the way of results. We can see that by going to this page and looking at the section entitled "PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH." The last three of these papers do not mention memory or engrams, so we may assume that they did nothing to substantiate claims about neural storage places of memory (engrams).  The only paper mentioning memory or engrams in its title is a paper entitled "Exploring the F-actin/CPEB3 interaction and its possible role in the molecular mechanism of long-term memory." The paper can be read in full here. 

The paper does not do anything to substantiate claims that memories are stored in engrams in the brain.  The paper merely presents a speculative chemistry model and some speculative computer simulations.  No experiments with animals have been done, and no research on human brains has been done. Apparently, there were no lab experiments of any type done, with all of the "experimentation" going on inside computers. The computer simulations do not involve the biochemical storage or preservation of any learned information.  The quotes below help show the wildly speculative nature of the paper (I have put in boldface words indicating that speculation is occurring). 

"Here we study the interaction between actin and CPEB3 and propose a molecular model for the complex structure of CPEB3 bound to an actin filament... Our model of the CPEB3/F-actin interaction suggests that F-actin potentially triggers the aggregation-prone structural transition of a short CPEB3 sequence....The CPEB/F-actin interaction could provide the mechanical force necessary to induce a structural transition of CPEB oligomers from a coiled-coil form into a beta-sheet–containing amyloid-like fiber...This beta-hairpin acts as a catalyst for forming intramolecular beta-sheets and could thereby help trigger the aggregation of CPEB3....These beta-sheets could, in turn, participate in further intermolecular interactions with free CPEB3 monomers, triggering a cascade of aggregation....Several possible mechanisms by which SUMOylation could regulate the CPEB3/F-actin interaction are discussed in SI Appendix....
We propose that SUMOylation of CPEB3 in its basal state might repress the CPEB3/F-actin interaction....Furthermore, the beta-hairpin form of the zipper suggests that it might be able to trigger extensive beta-sheet formation in the N-terminal prion domain, PRD....The beta hairpin form of zipper sequence is a potential core for the formation of intramolecular beta sheets... The maintenance of the actin cytoskeleton and synaptic strength then might involve the competition between CPEB3 and cofilin or other ABPs....We also propose that the CPEB3/F-actin interaction might be regulated by the SUMOylation of CPEB3, based on bioinformatic searches for potential SUMOylation sites as well as SUMO interacting motifs in CPEB3....We therefore propose that SUMOylation of CPEB3 is a potential inhibitor for the CPEB3/F-actin interaction."

The wildly speculative nature of the paper is shown by the boldface words above, and by the sentence at the end of the paper's long "Results" section: "Further experimental and theoretical work is required to determine which, if any, of these mechanisms is operating in neurons."  Note well the phrase "which, if any" here. This is a confession that the authors are not sure a single one of the imagined effects actually occur in a brain. 

In this case the US government paid a million dollars for essentially a big bucket of "mights" and "coulds," and the authors do not seem confident that any of these speculative effects actually occur in the brain.  Whatever is going on here, it doesn't sound like science with a capital S (which I define as facts established by observations or experiments).  Even if all of the wildly speculative "mights" and "coulds" were true, it still would not do a thing to show that memories lasting fifty years can be stored in dendritic spines and synapses that do not last for years, and are made up of proteins that have average lifetimes of only a few weeks. The idea that changes in synapse strength can store complex learned information has never made any sense.  Information is physically stored not by some mere strengthening but by using some type of coding system to write information with tokens of representation. A mere strengthening never stores information.  The idea that you store memories by synapse strengthening makes no more sense than the idea that you learn school lessons by strengthening your arm muscles.  If memories were stored as differences in synapse strengths, you could never recall such memories, because the brain lacks any such thing as a synapse strength reader. 

Wednesday, December 22, 2021

A New Paper Reminds Us Neuroscientists Can't Get Their Story Straight About Memory Storage

There is a new scientific paper with the inappropriate title "Where is Memory Information Stored in the Brain?" This is not the question we should be asking. The question we should be asking is: "Is memory information stored in the brain?"  Although it was probably not the intention of the authors (James Tee and Desmond P. Taylor), what we get in the paper is a portrait of how neuroscientists are floundering around on this topic, like some poor shark that is left struggling in the sand after going after its prey too aggressively. 

Tee and Taylor claim this on page 5: "Based on his discovery of the synapse as the physiological basis of memory storage, Kandel was awarded the year 2000 Nobel Prize in Physiology or Medicine (Nobel Prize, 2000)." This is a misstatement about a very important topic. The Nobel Prize listing for Kandel does not mention memory. The official page listing the year 2000 Nobel Prize for physiology states only the following: "The Nobel Prize in Physiology or Medicine 2000 was awarded jointly to Arvid Carlsson, Paul Greengard and Eric R. Kandel 'for their discoveries concerning signal transduction in the nervous system.' " The Nobel committee did not make any claim that synapses had been discovered as the basis of memory. 

Before making this claim about the Nobel Prize, Tee and Taylor state something that makes no sense. They state, "The groundbreaking work on how memory is (believed to be) stored in the human brain was performed by the research laboratory of Eric R. Kandel on the sea slug Aplysia (Kupfermann et al., 1970; Pinsker et al., 1970)." How could research on a tiny sea slug tell us how human beings store memories?  The paper in question can be read here. The paper fails to mention a testing of more than a single animal, thereby strongly violating rules of robust experimental research on animals (under which an effect should not be claimed unless at least 15 subjects were tested).  We have no reliable evidence about memory storage from this paper. If the paper somehow led to its authors getting a Nobel Prize, that may have been a careless accolade.  The Nobel Prize committee is pretty good about awarding prizes only to the well-deserving, but it may occasionally fall under the gravitational influence of scientists boasting about some "breakthrough" that was not really any such thing. 

Equally undeserving of a Nobel Prize was the next research discussed by our new paper on memory storage: research claiming a discovery of "place cells" in the hippocampus. John O'Keefe published a paper in 1976 claiming to detect "place units" in the hippocampus of rats. The paper also used the term "place cells."  The claim was that certain cells were more active when a rat was in a certain spatial position. The paper did not meet standards of good experimental science. For one thing, the study group sizes it used were far too small for robust evidence to have been produced.  One of the study groups consisted of only five rats, and another consisted of only four rats.  15 animals per study group is the minimum for a moderately convincing result.  For another thing, no blinding protocol was used. And the study was not a pre-registered study, but was apparently one of those studies in which an analyst is free to fish for whatever effect he may feel like finding after data has been collected. 
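A rough Monte Carlo sketch (my own illustration, not taken from any of the papers discussed) shows why such tiny study groups are weak evidence: even when a solidly real effect exists, a four-animal group "detects" it far less reliably than a fifteen-animal group. The effect size, noise level, and crude t-statistic threshold below are all assumptions chosen purely for illustration.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

def detects_effect(n, true_diff=1.0, trials=2000, t_crit=2.0):
    """Fraction of simulated experiments in which a crude two-sample
    t-test (|t| > t_crit) detects a real group difference of true_diff."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(true_diff, 1.0) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > t_crit:
            hits += 1
    return hits / trials

power_4 = detects_effect(4)
power_15 = detects_effect(15)
print(f"detection rate with n=4:  {power_4:.2f}")
print(f"detection rate with n=15: {power_15:.2f}")
```

With these assumed numbers the four-animal groups miss the real effect most of the time, which is one reason small-sample findings are so unreliable.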

The visuals in the study compare wavy signal lines collected while a rat was in different areas of an enclosed unit. The wavy signal lines look pretty much the same no matter which area the rats were in. But O'Keefe claims to have found differences.  No one should be persuaded that the paper shows robust evidence for an important real effect.  We should suspect that the analyst has looked for stretches of wavy lines that looked different when the rat was in different areas, and chosen stretches of wavy lines that best supported his claim that some cells were more active when the rats were in different areas.  Similar Questionable Research Practices (with similar too-small study groups such as four rats) can be seen in O'Keefe's 1978 paper here.

Although O'Keefe's 1976 paper and 1978 paper were not at all a robust demonstration of any important effect, the myth that "place cells" had been discovered started to spread around among neuroscience professors.  O'Keefe even got a Nobel Prize. The Nobel Prize committee is normally pretty good about awarding prizes only when an important discovery has been made for which there was very good evidence. Awarding O'Keefe a Nobel Prize for his unconvincing work on supposed "place cells" seems like another flub of the normally trusty Nobel Prize committee. Even if certain cells are more active when rats are in certain positions (something we would always expect to observe from chance variations), that does nothing to show that there is anything like a map of spatial locations in the brain of rats. 

On page 7 of the new paper on memory storage, we have a discussion of equally unconvincing results:

"LeDoux found that this conditioned fear resulted in LTP (strengthening of synapses) in the auditory neurons of the amygdala, to which he concluded that the LTP constituted memory of the conditioned fear. That is, memory was stored by way of strengthening the synapses, as hypothesized by Hebb."

You may understand why this is nothing like convincing evidence when you realize that synapses are constantly undergoing random changes. At any moment billions of synapses may be weakening, and billions of other synapses may be strengthening.  So finding some strengthening of synapses is no evidence of memory formation. It is merely finding what goes on constantly in the brain, with weakening of synapses occurring just as often as strengthening. The new paper on memory storage confesses this when it says on page 8 that: "synapses in the brain are constantly changing, in part due to the inevitable existence of noise." 
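The point can be illustrated with a toy simulation (an illustration of my own, with an arbitrary assumed noise level): if synapse strengths merely drift randomly, roughly half of them will be found "strengthened" at any moment, so finding some strengthened synapses after training proves nothing by itself.

```python
import random

random.seed(1)  # fixed seed for repeatability

# Give each simulated synapse a small, zero-mean random change in
# strength, as the constant noise-driven change conceded by the
# paper would produce.
n_synapses = 100_000
changes = [random.gauss(0.0, 0.05) for _ in range(n_synapses)]

strengthened = sum(c > 0 for c in changes)
weakened = n_synapses - strengthened
print(f"strengthened: {strengthened}, weakened: {weakened}")
# With zero-mean noise, about half strengthen and half weaken,
# so "some synapses got stronger" is exactly the chance expectation.
```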

On pages 8-9 of the new paper, Tee and Taylor say that scientists had hopes that there would be breakthroughs in handling memory problems by studying synapses, but that "the long-awaited breakthroughs have yet to be found, raising some doubts against Hebb’s synaptic [memory storage] hypothesis and the subsequent associated experimental findings." Tee and Taylor give us on page 9 a quotation from two other scientists, one that gives a great reason for rejecting theories of synaptic memory storage:

"If we believe that memories are made of patterns of synaptic connections sculpted by experience, and if we know, behaviorally, that motor memories last a lifetime, then how can we explain the fact that individual synaptic spines are constantly turning over and that aggregate synaptic strengths are constantly fluctuating? How can the memories outlast their putative constitutive components?"

Tee and Taylor  then tell us that this problem does not just involve motor memories:

"They further pointed out that this mystery existed beyond motor neuroscience, extending to all of systems neuroscience given that many studies have found such constant turn over of synapses regardless of the cortical region. In other words, synapses are constantly changing throughout the entire brain: 'How is the permanence of memory constructed from the evanescence of synaptic spines?' (Bizzi & Ajemian, 2015, p. 92). This is perhaps the biggest challenge against the notion of synapse as the physical basis of memory."

Tee and Taylor then discuss various experiments that defy the synaptic theory of memory storage.  Most of the studies are guilty of the same Questionable Research Practices that are so extremely common in neuroscience research these days, so I need not discuss them.  We hear on page 14 about various scientists postulating theories that are alternatives to the synaptic theory of memory storage:

"The logical question to pose at this point is: if memory information is not stored in the synapse, then where is it? Glanzman suggested that memory might be stored in the nucleus of the neurons (Chen et al., 2014). On the other hand, Tonegawa proposed that memory might be stored in the connectivity pathways (circuit connections) of a network of neurons (Ryan et al., 2015). Hesslow emphasized that memory is highly unlikely to be a network property (in disagreement with Tonegawa), and further posited that the memory mechanism is intrinsic to the neuron (in agreement with Glanzman) (Johansson et al., 2014)."

You get the idea? These guys are in disarray, kind of all over the map, waffling around between different cheesy theories of memory storage. All of the ideas mentioned above have their own fatal difficulties, reasons why they cannot be true.  In particular, there is no place in a neuron where memory could be written, with the exception of DNA and RNA; and there is zero evidence that learned knowledge such as episodic memories and school lessons are stored in DNA or RNA (capable of storing only low-level chemical information).  Human DNA has been extremely well-studied by long well-funded multi-year research projects such as the Human Genome Project completed in 2003 and the ENCODE project, and no one has found a bit of evidence of anything in DNA that stores episodic memory or any information learned in school.

Tee and Taylor then give us more examples of experiments that they think may support the idea of memories stored in the bodies of neurons (rather than synapses). But they fail to actually support such an idea because the studies follow Questionable Research Practices.  For example, they cite the study here, which fails to qualify as a robust well-designed study because it uses study group sizes as small as 9, 11 and 13. To give another example, Tee and Taylor cite the Glanzman study here, which  fails to qualify as a robust well-designed study because it uses study group sizes as small as 7. Alas, the use of insufficient sample sizes is the rule rather than the exception in today's cognitive neuroscience, and Tee and Taylor seem to ignore this problem.  

The heavily hyped Glanzman study (guilty of Questionable Research Practices) claimed a memory transfer between Aplysia animals, achieved by RNA injections. Such a study can have little relevance to permanent memory storage, because RNA molecules have very short lifetimes of less than an hour. 

Finally in Tee and Taylor's paper, we have a Conclusions section, which begins with this confession which should cause us to doubt all claims of neural memory storage: "After more than 70 years of research efforts by cognitive psychologists and neuroscientists, the question of where memory information is stored in the brain remains unresolved."  This is followed by a statement that is at least true in the first part: "Although the long-held synaptic hypothesis remains as the de facto and most widely accepted dogma, there is growing evidence in support of the cell-intrinsic hypothesis."  It is correct to call the synaptic memory hypothesis a dogma (as I have done repeatedly on this blog). But Tee and Taylor commit an error in claiming "there is growing evidence in support of the cell-intrinsic hypothesis" (the hypothesis that memories are stored in the bodies of neurons rather than synapses that are part of connections between neurons).  There is no robust evidence in support of such a hypothesis, and the papers Tee and Taylor have cited as supporting such a hypothesis are unconvincing because of their Questionable Research Practices such as too-small sample sizes. 

On their last two pages the authors end up in shoulder-shrugging mode, saying, "while the cell might be storing the memory information, the synapse might be required for the initial formation and the subsequent retrieval of the memory."  We are left with the impression of scientists in disarray, without any clear idea of what they are talking about, rather like some theologian speculating about exactly where the angels live in heaven, bouncing around from one idea to another.  In their last paragraph Tee and Taylor speculate about memories being inherited from one generation to another by DNA, which is obviously the wildest speculation. 

Our takeaway from Tee and Taylor's recent paper should be this: scientists are in baffled disarray on the topic of memory. They have no well-established theory of memory storage in the brain, and are waffling around between different speculations that contradict each other.  We are left with strong reasons for suspecting that scientists are getting nowhere trying to establish a theory of memory storage in the brain.  This is pretty much what we should expect if memories are not stored in brains, and cannot be stored in brains.  Always be very suspicious when someone says something along the lines of, "What scientists have been teaching for decades is not true, but they have a new theory that has finally got it right." More likely the new theory is as false as the old theory. 

If anyone is tempted to put credence in this "cell-intrinsic hypothesis" of memory storage, he should remind himself of the physical limitations of DNA.  DNA uses what is called the genetic code. The genetic code is shown below. The A, C, T and G letters at the center stand for the four types of nucleotide base pairs used by DNA:  adenine (A), cytosine (C), guanine (G), and thymine (T). Different triple combinations of these base pairs stand for different amino acids (the twenty types of chemicals shown on the outer ring of the visual below). 

So DNA is profoundly limited in what it can store. In the human body DNA can only store low-level chemical information. We know of no way in which DNA in a human body could store any such things as information learned in school or episodic memories.  Such things cannot be stored using the genetic code used by DNA.  No one has ever found any evidence that strings of characters (such as memorized text) are stored in human DNA, nor has anyone found any evidence that visual information is stored in human DNA. Moreover, if we had to write memories to DNA or read memories from DNA, it would be all-the-more impossible to explain the phenomena of instant memory formation and instant memory retrieval. 
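A toy translation sketch (showing only a handful of the 64 codons, for illustration only) makes the limitation concrete: the genetic code maps DNA triplets to amino acids and stop signals, and to nothing else. There is no room in this code for letters, images, or episodic detail.

```python
# A minimal sketch of the genetic code: DNA is read in triplets
# (codons), and each codon specifies only an amino acid or a stop
# signal.  Only a few of the 64 codons are listed here.
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TTT": "Phe", "GGC": "Gly", "GCA": "Ala",
    "TGC": "Cys", "TAA": "STOP",
}

def translate(dna):
    """Translate a DNA string codon-by-codon using the partial table."""
    out = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        out.append(aa)
    return out

print(translate("ATGGCATGCTAA"))  # ['Met', 'Ala', 'Cys']
```

Whatever the input DNA, the output is never anything but a chain of amino-acid names: low-level chemical information.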

Some have suggested that DNA methylation marks might be some mechanism for memory storage. This idea is very unbelievable. DNA methylation is the appearance of a chemical mark on different positions of DNA.  The chemical mark is almost always the same H3C addition to the cytosine nucleotide base pair.  These chemical marks serve as transcription suppressors which prevent particular genes from being expressed. Conceptually we may think of a DNA methylation mark as an "off switch" that turns off particular genes. 

The idea that the collection of these chemical "off switches" can serve as a system for storing memories is unbelievable. DNA is slowly read by cells in a rather sluggish process called transcription, but there is no physical mechanism in the body for specifically reading only DNA methylation marks. If there were anything in the body for reading only DNA methylation marks, it would be so slow that it could never account for instant memory recall.  We know the purpose that DNA methylation marks serve in the body: the purpose of switching off the expression of particular genes. Anyone claiming that such marks also store human memories is rather like some person claiming that his laundry detergent is a secret system for storing very complex information. 

A metric relevant to such claims is the maximum speed of DNA transcription. The reading of DNA base pairs occurs at a maximum rate of about 20 amino acids per second, which is about 60 nucleotide pairs per second.  This is the fastest rate, with preparatory work being much slower. DNA methylation occurs on only one of the four base pairs, meaning that no more than about 15 DNA methylation marks could be read in a second (after slower preparatory work is done).  
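Stated as explicit arithmetic, using the rates assumed in the paragraph above:

```python
# The reading-rate arithmetic from the text, made explicit.
aa_per_second = 20                 # assumed peak rate, in amino acids per second
nt_per_second = aa_per_second * 3  # one codon = 3 nucleotide pairs per amino acid

# Methylation marks sit almost always on cytosine, one of the four
# bases, so at most about a quarter of the positions read could
# carry a mark.
marks_per_second = nt_per_second / 4

print(f"{nt_per_second} nucleotide pairs/s, ~{marks_per_second:.0f} marks/s")
```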

Let us imagine (very implausibly) that DNA methylation marks serve as a kind of binary code for storing information.  Let us also imagine (very implausibly) that there is a system by which letters can be stored in the body, by means of something like the ASCII code, and by means of DNA methylation.  Such a system would have storage requirements something like this:


Letter     ASCII number equivalent     Binary equivalent
A          65                          1000001
B          66                          1000010
C          67                          1000011
Under such a storage system, once the exact spot had been found for reading the right information (which would take a very long time given that the brain has no indexing system and no position coordinate system), and after some chemical preparatory work had been done to enable reading from DNA, information could be read at a rate of no more than about four characters per second. But humans can recall things much faster than such a rate. When humans talk fast, they are speaking at a rate of more than two words per second (more than 10 characters per second).  So if you ask me to describe how the American Civil War began, unfolded and ended, I can spit out remembered information at a rate several times faster than we can account for by a reading of DNA methylation marks, even if we completely ignore the time it would take to find the right little spot in the brain that stored exactly the right information to be recalled. 

A realistic accounting of the time needed for memory recall of information stored in binary form by DNA methylation would have to add up all of these things:
  • The time needed for finding the exact spot in the brain where the correct recalled information was stored (requiring many minutes or hours or days, given no indexing and no coordinate system in the brain);
  • The time needed for chemical preparatory work that would have to be done before DNA can be read (such as the time needed to get RNA molecules that can do the reading);
  • Reading DNA methylation marks (encoding binary numbers) at a maximum rate of no more than four characters per second (and usually a much slower rate because of a sparse scattering of such marks);
  • Translating such binary numbers into their decimal equivalent;
  • Translating such decimal numbers into character equivalents;
  • Translating such retrieved letters into speech.
All of this would be so slow that if memories were stored as DNA methylation marks, you would never be able to speak correct recalled information at a rate a tenth as fast as two words per second, as humans can do. Similarly, you would never be able to form new memories instantly (as humans are constantly doing) if memory storage required writing binary information as DNA methylation marks, which would be a very slow process.  Humans can form new memories at the same rate at which they can recall memories. Suppose you are leaving to go food shopping and someone in your house says, "Please buy me a loaf of whole wheat bread and some orange juice." You may form a new memory of those exact words, at a rate of two words per second.  Storing such information as DNA methylation marks would be much slower than such a rate. 
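Using the sample sentence above, a few lines of arithmetic (a sketch using the text's own generous figures, and ignoring search and preparatory time entirely) show how badly such a read-out would lag behind ordinary speech:

```python
# Compare fast speech with the text's assumed maximum read-out rate
# for DNA-methylation-encoded characters (~4 characters per second).
sentence = "Please buy me a loaf of whole wheat bread and some orange juice"

chars = len(sentence)
words = len(sentence.split())

speech_seconds = words / 2.0     # fast speech: ~2 words per second
readout_seconds = chars / 4.0    # assumed maximum: 4 characters per second

print(f"{words} words, {chars} characters")
print(f"speech:           {speech_seconds:.1f} s")
print(f"methylation read: {readout_seconds:.1f} s (search and prep time ignored)")
```

Even with search and chemical preparation assumed to take zero time, the read-out is several times slower than the speech it would have to keep up with.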

I may note that while scientists can read DNA and DNA methylation marks from neural tissue, no one has ever found the slightest speck of human learned information stored in DNA or DNA methylation marks, synapse strengths, or any other type of representation in the brain; nor has anyone found any evidence of any coding scheme by which letters or numbers or visual images are stored in human DNA or DNA methylation marks.  When brain surgeons remove half of a brain (to treat very severe seizures) or remove portions of a brain (to treat severe epilepsy or cancer), they discard the cut-out brain tissue, and do not try to retrieve memory information stored in it.  They know that attempting such a thing would be utterly futile. 

Wednesday, December 15, 2021

Scientific American's "New Clues" on Mind Origins Sound Like a Handful of Moonbeams

Scientific American recently published an article by two biology professors, an article on the origin of mind.  We have a clickbait title of "New Clues About the Origin of Biological Intelligence," followed by a misleading subtitle of "A common solution is emerging in two different fields: developmental biology and neuroscience."  Then, contrary to their subtitle, the authors (Rafael Yuste and Michael Levin) state, "While scientists are still working out the details of how the eye evolved, we are also still stuck on the question of how intelligence emerges in biology."  So now biologists are saying they are still stuck on both of these things? 

Funny, that's a claim that contradicts what biologists have been telling us for many decades. For many decades, biologists have made the bogus boast that the mere "natural selection" explanation of Charles Darwin was sufficient to explain the appearance of vision, a claim that has never made any sense,  because so-called natural selection is a mere theory of accumulation that does not explain any cases of vast organization such as we see in vision systems and their incredibly intricate biochemistry.  Vastly organized things (such as bridges and cells and TV sets and protein complexes) are not mere accumulations (examples of which are snowdrifts, leaf piles and drain sludge buildup). And biologists have also for many decades been making the equally bogus boast that they understand the origin of human minds, based on the claim that it was just an evolution of bigger or better brains (a claim that is false for reasons explained in the posts on this blog). 

It would be great if our Scientific American article was a frank explanation of why scientists are stuck on such things.  But instead the article is an example of a staple of science literature: an article that not-very-honestly kind of claims "we're getting there" on some explanatory problem which scientists are actually making little or no  progress on. To read about the modus operandi of many articles of this type, read my post " 'We're Getting There' Baloney Recurs in Science Literature." 

We quickly get an inkling of a strategy that will be used by the authors.  It is a strategy similar to the witless or deceptive strategy Charles Darwin used in The Descent of Man when he claimed this near the beginning of Chapter 3: “My object in this chapter is to show that there is no fundamental difference between man and the higher mammals in their mental faculties." The statement was a huge falsehood, and it is easy to understand why Darwin made it. The more some biologist tries to shrink and minimize the human mind,  like someone saying the works of Shakespeare are "just some ink marks on paper," the more likely someone may be to believe that such a biologist can explain the mind's origin. The more a biologist  dehumanizes humans, making them sound like animals, the more likely someone may be to think that such a biologist can explain the origin of humans. 

Seeming to follow just such a strategy, the authors (Yuste and Levin) try to fool us into thinking there is nothing very special about intelligence. They write this:

"In fact, intelligence—a purposeful response to available information, often anticipating the future—is not restricted to the minds of some privileged species. It is distributed throughout biology, at many different spatial and temporal scales. There are not just intelligent people, mammals, birds and cephalopods. Intelligent, purposeful problem-solving behavior can be found in parts of all living things: single cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks."

Notice the gigantically shrunken and downgraded definition of intelligence, as a mere "purposeful response to available information."  Under such a definition, a smoke detector is intelligent, and bicycle brakes are intelligent (because they respond to information about foot pressure or hand pressure); and an old round 1960's Honeywell thermostat is also intelligent, because if I set the thermostat to 70, and it got much colder outside, the thermostat turned up the heat to keep the temperature at 70.  But smoke detectors and bicycle brakes and old Honeywell thermostats are not intelligent, and neither are the much newer computerized thermostats that are marketed as "intelligent thermostats."  

The Merriam-Webster dictionary gives us two definitions of intelligence: 

"(1) the ability to learn or understand or to deal with new or trying situations : REASON.

(2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)."

Very obviously, such a definition does not apply to some of the things that our Scientific American biologists have claimed are intelligent: "single cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks." Such things may be driven or may have been designed by some mysterious intelligent power greater than the human mind, but they are not intelligent themselves.  Protein molecules, ribosomes and individual cells do not have minds or intelligence.  Rather than referring to such things as examples of "biological intelligence," Yuste and Levin should have merely called such things examples of "biological responsiveness." 

Our authors then give us a paragraph that is misleading and poorly reasoned. We read this:

"A common solution is emerging in two different fields: developmental biology and neuroscience. The argument proceeds in three steps. The first rests on one of natural selection’s first and best design ideas: modularity. Modules are self-contained functional units like apartments in a building. Modules implement local goals that are, to some degree, self-maintaining and self-controlled. Modules have a basal problem-solving intelligence, and their relative independence from the rest of the system enables them to achieve their goals despite changing conditions. In our building example, a family living in an apartment could carry on their normal life and pursue their goals, sending the children to school for example, regardless of what is happening in the other apartments. In the body, for example, organs such as the liver operate with a specific low-level function, such as controlling nutrients in the blood, in relative independence with respect to what is happening, say, in the brain."

The claim that "modularity" was one of "natural selection's first and best design ideas" is false. A module is defined by the Cambridge Dictionary as "one of a set of separate parts that, when combined, form a complete whole."  In computing and spacecraft and education, each module is itself a complex thing that can exist independently, and such complex modules can be combined to form units of greater complexity. A classic example of modularity is the Lunar Excursion Module (LEM) of the Apollo spacecraft, which detached from the main spacecraft to land on the moon, returning later to reunite with the main spacecraft.  Nowhere did Darwin discuss modules.  Darwin's idea was that complex things arise by an accumulation of countless tiny changes.  Such an idea is very different from thinking that very complex organisms arise from a combination of modules.  And complex organisms do not arise from a combination of independent modules. The organs of the human body are not at all independent of each other. Every organ in the body depends on the correct function of several other organs in the body, besides having additional bodily dependencies. 

The claim the authors make of a liver existing "in relative independence" is untrue.  A liver would shut down within a single day if either the heart or the lungs or the brain were removed (brains are necessary for the autonomic function of the heart and the lungs).  The liver would not last more than a few weeks if the kidneys or the stomach were removed.  Instead of being independent modules, the cells and organs of the body are gigantically interdependent. The existence of such massively interdependent objects in bodies (with so many cross-dependencies)  makes it a million times harder for biologists to credibly explain biological origins, and makes a mockery of their boastful claims to understand such origins. So it is no surprise that biologists frequently resort to misleading statements denying or downplaying such massive interdependence, statements like the statement I quoted in italics above.  

The diagram below gives us a hint of the cross-dependencies in biological systems, but fails to adequately represent them. A better diagram would be one in which there were fifty or more arrows indicating internal dependencies. 

[Diagram: complex biological systems]

Our authors have not even got apartment buildings right.  I live in an apartment that is one of many in my building. My apartment is certainly not an independent module. It is dependent on the overall plumbing system and gas system and heating system and electrical system shared by the entire building. 

The authors (Yuste and Levin) then discuss hierarchical organization. Hierarchical organization is certainly a very big aspect of physical human bodies. Subatomic particles are organized into atoms, which are organized into amino acids, which are organized into protein molecules, which are organized into protein complexes, which are organized into organelles, which are organized into cells, which are organized into tissues, which are organized into organs, which are organized into organ systems, which are organized into organisms. All of this is a great embarrassment for today's biologists, who lack both a theory of the origin of hierarchical organization and any theory at all of biological organization (Darwinism being a mere theory of accumulation, not a theory of organization).

Contrary to what our Scientific American authors insinuate, hierarchical organization is not a good description of minds. Our minds have no organization anything like the hierarchical organization of our bodies. So our authors err by suggesting  hierarchical organization as some kind of "new clue" in understanding the origin of minds.  Here is their vaporous reasoning with no real substance behind it:

"In biology, different organs could belong to the same body of an organism, whose goal would be to preserve itself and reproduce, and different organisms could belong to a community, like a beehive, whose goal would be to maintain a stable environment for its members. Similarly, the local metabolic and signaling goals of the cells integrate toward a morphogenetic outcome of building and repairing complex organs. Thus, increasingly sophisticated intelligence emerges from hierarchies of modules."

This is nothing remotely resembling a credible explanation for the origin of human minds that can do math and philosophy and abstract reasoning. The last sentence of the paragraph uses "thus" in a very inappropriate way, for none of the preceding talk explains how humans could get minds. Our minds are not "hierarchies of modules."  Instead of being independent modules, different aspects of our minds are very much dependent on other aspects of our minds.  Complex thought and language and memory and understanding are not independent modules. With very few exceptions, you cannot engage in complex thought without language and memory; and every time you use language you are relying on memory and understanding (your recall of the meaning of words); and you can't understand much of anything without using your memory. 

Next our Scientific American authors speak in a not very helpful way, using the term "pattern completion" very strangely. Oddly, they state this:

"A third step in our argument addresses this problem: each module has a few key elements that serve as control knobs or trigger points that activate the module. This is known as pattern completion, where the activation of a part of the system turns on the entire system."

Whatever the writers are talking about, it does nothing to explain minds. Yuste and Levin end by trying to cite some research dealing with this "pattern completion" effect they referred to. They cite only a paper that seems to be guilty of the same Questionable Research Practices that most neuroscience experiments are guilty of these days. It is a mouse experiment that used way-too-small study group sizes, such as study groups of 6 mice and 7 mice and 9 mice. The authors of the paper state, "We did not use a statistical power analysis to determine the number of animals used in each experiment beforehand." Such a confession is usually made when experimenters have used far fewer than the 15 subjects per study group recommended for robust results. The authors tell us "experimental data were collected not blinded to experimental groups," and make no claim that any blinding protocol was used. Because of these procedural defects, the paper provides no robust evidence for what Yuste and Levin claim, that "fascinating pattern-completion neurons activated small modules of cells that encoded visual perceptions, which were interpreted by the mouse as real objects." The only other paper cited by Yuste and Levin is a self-citation that has nothing to do with the origin of minds.
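To see why study groups of 6 to 9 animals are inadequate, one can run the standard two-sample power calculation that the paper's authors admit they skipped. The sketch below is purely illustrative (the function name is my own, and it uses the common normal-approximation formula rather than anything from the cited paper): even for a "large" effect size by Cohen's conventions, far more than 9 subjects per group are needed to reach the conventional 80% power.

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+ standard library

def required_n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison, via the normal-approximation formula:
        n = 2 * (z_{1 - alpha/2} + z_{power})**2 / d**2
    where d is Cohen's standardized effect size."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return ceil(n)

# Even assuming a large effect (d = 0.8), about 25 animals per group
# are required; a medium effect (d = 0.5) requires about 63.
print(required_n_per_group(0.8))  # → 25
print(required_n_per_group(0.5))  # → 63
```

Under these standard assumptions, groups of 6, 7, or 9 mice fall far short even in the most optimistic (large-effect) scenario, which is why the absence of a power analysis is a red flag.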

Instead of giving us any actual encouragement that scientists have "new clues" as to the origin of minds, the Scientific American article rather leaves us with the impression that mainstream scientists have no good clues about such a thing. You could postulate a credible theory about the origin of human minds, but the "old guard" editors of Scientific American would never publish it. 

What is going on in Levin's latest Scientific American article is the same kind of inappropriate language that Levin abundantly used in a long article he co-authored with Daniel Dennett, one entitled "Cognition All the Way Down." In that article, Levin and Dennett use the words "cognition" and "agents" to refer to things like cells that have neither minds nor cognition. I don't think either Levin or Dennett actually believes that cells have minds or cognition. Their article reads like something a person might write not because he believed that cells actually have minds and selves and thoughts, but merely because he thought that speaking as if cells are "agents" with "cognition" is a convenient rhetorical device. The Cambridge Dictionary defines cognition as "the use of conscious mental processes." The same dictionary defines an agent as "a person who acts for or represents another."

What seems to be going on is simply that words are being used in improper ways, like someone using the word "gift" to describe a bombing. It's just what we would expect from Darwinists, for improper language has always been at the center of Darwinism from its very beginning. At the heart of Darwinism is the misnomer "natural selection," which refers to a mere survival-of-the-fittest effect that is not actually selection (the word "selection" refers to a choice made by a conscious agent). We should not be surprised that some thinkers who have for so long been talking about selection-that-isn't-really-selection are now speaking about agents-that-aren't-really-agents and cognition-that-isn't-really-cognition and intelligence-that-isn't-really-intelligence.