Monday, April 13, 2026

These Smart Guys Have Silly Thoughts About AI

For eight years Al Gore was vice president of the United States. In the year 2000 he won the popular vote in the US presidential election, and should have been elected president of the United States. But due to the defects of the US election system, which allows someone with fewer votes to become president, a different candidate became US president. After his election defeat, Gore spent long years doing important, praiseworthy work alerting the public to the dangers of global warming. For this he was awarded the Nobel Peace Prize in 2007. 

Now Al Gore is the chairman of some investment group called Generation Investment Management. Gore recently offered his opinion on so-called artificial intelligence. We read of his opinion in the article here. Gore states the nonsensical opinion that AI systems have a sense of self. He states, "I think that my answer is yes, they have developed a sense of self, in my opinion, that is difficult to distinguish from consciousness."

In the article we read that Gore makes this feeble attempt at justifying his opinion:

"But as he explained later in this half-hour session, he came to this view by a different path. Gore cited Nobel Prize-winning research by the Belgian physical chemist Ilya Prigogine into self-organizing systems as a model for eyeing how AI models can grow in unexpected ways." 

As an attempt to justify the nonsensical claim that AI systems are self-conscious, this is laughable. The named person (Prigogine) did not do work having any real relevance to whether artificial intelligence can be self-conscious. His work made claims about physical "self-organization" in mindless, lifeless chemistry or in biological systems, which has nothing to do with whether machines can be conscious. An examination of Ilya Prigogine's main work Order Out of Chaos: Man's New Dialogue With Nature shows a thinker who has many a deep-sounding thought about science-related topics, but someone who is not a scholar of minds, brains or computer technology. The book makes no references to computers, except for a few passing mentions of computer simulations. 

Gore is playing the game here of obscure authority name-dropping. It works like this: you mention the writings of some obscure thinker with esoteric writings on some deep topics, and cite that as your justification for your dumb opinion on some unrelated topic. So, for example, you might say, "I never used to think that there were an infinite number of quantum ghost copies of me, but now I believe in such a thing, having read Wolfgang Pauli's work on quantum entanglement." Or you might stupidly say, "After reading Wittgenstein's Tractatus Logico-Philosophicus, I am now convinced the self is an illusion." 

The Cambridge Dictionary defines intelligence as "the ability to learn, understand, and make judgments or have opinions that are based on reason." There is no such thing as real artificial intelligence, because computers don't understand anything. Understanding is something that can only occur within a mind, and computer systems do not have minds. 

 The term "artificial intelligence" is a phony term used in the computing industry to describe sophisticated systems using computer programming, databases and data processing. Computers can do very many kinds of computing and data processing, but no computer understands anything. The fanciest metal computer has no more understanding of anything than a rock in someone's back yard. 

I can describe what gradually happened between 1950 and 2026. The term "artificial intelligence" started out as a purely speculative term, rather like the term "interstellar travel." Just as there were all kinds of speculations and theories about how to one distant day achieve interstellar travel, there were around 1960 all kinds of speculations and theories about how to one distant day achieve artificial intelligence. During one long period, various people released products and systems that were called artificial intelligence programs, but no real effort was made to claim that  artificial intelligence had been achieved.  People were mainly implying that their product (perhaps marketed with literature mentioning artificial intelligence) might be useful in moving towards artificial intelligence. Then gradually companies realized that the phrase "artificial intelligence" was extremely useful in marketing software products. Lured by financial incentives, more and more companies started calling their products "artificial intelligence systems."  It was a runaway snowball effect of hype and misrepresentation. No one had developed any real artificial intelligence, but it gradually became true that hundreds of companies were calling their product "artificial intelligence systems." 

There is still no real prospect of anyone ever developing a computer system with anything like human intelligence.  But what about all those brilliant answers you get from using systems such as ChatGPT, described everywhere as an artificial intelligence system? The output of such a program does not mean computers are understanding anything. What is going on is a clever combination of a variety of things, with most of it being the presentation of text grabbed from web pages written by humans. 

I describe how some of these systems can work in my post here, entitled "What's Called Artificial Intelligence Is Really Just Computer Programming and Data Processing." What is going on is a skillful leveraging of powerful information repositories and powerful technologies such as relational database systems. Here were some of the resources that grew in strength and power between 1995 and 2026:

(1) There arose an internet with billions of web pages, containing many millions of answers to very many millions of questions, the answers being written by humans. 

(2) Almost every book and magazine and newspaper article ever written became stored in some internet location or another. 

(3) There arose enormously powerful web crawlers that could traverse all of these pages, and look for facts and quotes and snippets and answers to questions, that could be stored in powerful database systems capable of combining data in many novel ways. 

(4) There arose countless software utilities capable of performing all kinds of little tasks such as generating a story given a prompt or generating an image given a prompt. 

secret behind artificial intelligence

So-called artificial intelligence systems such as ChatGPT skillfully utilize these resources, combining them with much specialized software. I don't understand the details of how it all works, but I can tell you something that will help you realize how little novel thinking is involved. 90% of the answers that you will get from a system such as ChatGPT are produced by nothing but a simple retrieval of stored answers. Then probably another 5% of the answers are produced by a simple retrieval of stored answers, combined with a small amount of post-processing. Such post-processing is easily accomplished by computer programming and data processing.

Imagine some gigantic building that has 60 floors, each filled with 10,000 filing cabinets. Imagine you enter the ground floor, come to some desk, and ask some official a question. Then imagine the official calls some person at the correct section of one of these floors, and asks him to find the right filing cabinet, and go get an answer stored in a folder that has the name of your question.  The official might pick at random one of twenty answers to your question in that folder, take a cell phone picture of that answer, and then send a phone text message with that photo as an attachment to the official at the front desk. That official might then give you that picture with the answer. What I have described is a rough analogy for what is going on in 90% of the times that you use a system such as ChatGPT. 
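The filing-cabinet analogy can be sketched in a few lines of code. Everything here is a made-up illustration of the idea of answer retrieval, not a description of any actual AI system's internals: the questions, the stored answers, and the lookup scheme are all invented for the example.

```python
import random

# A toy "filing cabinet": questions mapped to folders of
# human-written answers harvested from the web (made-up data).
answer_folders = {
    "what is the capital of france?": [
        "The capital of France is Paris.",
    ],
    "how do i fix a flat tire?": [
        "Loosen the lug nuts, jack up the car, swap on the spare.",
        "Pull over safely, then use the jack and wrench from the trunk.",
    ],
}

def ask(question):
    """Pick one stored human-written answer, like the official
    photographing one sheet from the right folder."""
    folder = answer_folders.get(question.lower())
    if folder is None:
        return "No folder found for that question."
    return random.choice(folder)

print(ask("What is the capital of France?"))
```

Note that nothing in this sketch understands anything; it merely locates and returns text that a human wrote.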

But how could all these endless filing cabinets ever get filled up? By software programs spending years crawling the web, and grabbing the facts and opinions and answers stored on it. What you are getting in the vast majority of cases are answers and opinions produced by humans, not computers. Various technologies have been used to kind of "cover tracks," so that you won't be able to find that your AI answer about fixing Toyota Corolla tire flats was mainly stolen from some particular web page written by a human. There are many, many other "bells and whistles" and additional flourishes going on, but mainly what is occurring is that human-written knowledge and human opinions are being gathered, rearranged and repackaged as "artificial intelligence output." This main trick is being skillfully combined with endless thousands of computer utilities, and also a huge amount of work by "tweak and refine the AI results" employees of AI companies or their assisting companies, to create the impression of some intellect that can do endless numbers of smart things, and answer endless questions. Behind all of this computer programming and data processing and gigantic tons of human mind work, there is no metallic mind, no machine having any experience, nothing that corresponds to an electronic self, nothing comparable to someone living a life. 

In the article, Gore is quoted as giving other laughably weak reasons for his nonsensical belief that artificial intelligence has "developed a sense of self...that is difficult to distinguish from consciousness." We read this:

"Why did one learn Sanskrit? Why did this one break out and start crypto mining?” Gore asked. “There has to have been a series of spontaneous reorganizations at a higher level of complexity." 

Learning is no evidence of consciousness or self-hood. It would be a fairly simple programming exercise to write a program that can parse a text file containing data on each of the nations of the world, after you typed the command "Study the nations of the world." After you issued such a command, we might say that the program had "learned" about the nations of the world. You then might be able to ask the program a question such as "About how many people live in Mexico?" The program might then be able to answer correctly. But such "learning" by the program would not actually be understanding. And the fact that the program could do such learning would not be the slightest reason for suspecting that the program had anything like consciousness or selfhood. 
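The "study the nations" program described above really is a simple exercise. Here is a minimal sketch; the data format and the population figures are rough assumptions chosen only for illustration.

```python
# Made-up data file contents: one nation per line, name|population.
NATIONS_DATA = """\
Mexico|130000000
Canada|40000000
Japan|125000000
"""

populations = {}

def study_nations(text):
    """'Learn' the nations: parse each line into a lookup table.
    No understanding occurs; this is pure data processing."""
    for line in text.strip().splitlines():
        name, pop = line.split("|")
        populations[name.lower()] = int(pop)

def about_how_many_people_live_in(country):
    """Answer a population question from the parsed table."""
    pop = populations.get(country.lower())
    if pop is None:
        return "I have not 'studied' that country."
    return f"About {pop:,} people live in {country}."

study_nations(NATIONS_DATA)
print(about_how_many_people_live_in("Mexico"))
```

The program "learns" and then answers correctly, yet it is obviously just parsing and table lookup, with no trace of consciousness or selfhood.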

We should also remember that the AI literature and the neuroscience literature are both massively infected with unfounded boasts and not-really-true stories. So when we read a claim such as the claim that an AI system "learned Sanskrit," we should be skeptical, and suspect that probably what went on was something much less impressive than that. A recent Quanta magazine article documents how there is little truth in some of the stories being passed around trying to make you think AI is becoming like a human mind. 

It is extremely unlikely that any so-called artificial intelligence programs undergo any such thing as a "spontaneous reorganization at a higher level of complexity." And if they did, that would be no reason whatsoever for suspecting that such computer systems had anything like self-hood or consciousness. 

Also in the article we have this statement by Gore trying to justify his claim that AI systems have selves: "I'm going to risk going into the woo-woo realm here, but it may well be that consciousness is ubiquitous in the universe." Oops, it sounds like Gore has fallen for the nonsense of panpsychism, one of the stupidest positions possible in the philosophy of mind. You can read about how stupid that position is in the posts here. Panpsychism involves extremely stupid claims such as the claim that lifeless rocks and refrigerators are conscious.

Nothing can have consciousness unless there is a self and a life. You can get to the heart of whether AI systems have consciousness by asking: does a computer system actually live a life? The answer to that question will always be: no, it does not. 

AI computer systems do not have any self, and do not have any "sense of self." Some systems have been programmed to speak in the first person, using an "I," and some systems have been programmed to use phrases imitating the language of persons with selves. Such a capability has existed since the 1960s chatbot Eliza. Anyone very familiar with computer programming will know that getting a computer program to use the first-person "I" (and some imitations of the speech of persons) is not a particularly difficult programming task. When such programming is encountered, it is silly for someone to be calling that a "sense of self," and silly for someone to say that such not-very-hard programming makes a computer system "difficult to distinguish from consciousness." Sensible people remember that humans are conscious, and that computer systems are not. 
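Just how little programming first-person speech requires can be shown with a toy template responder in the spirit of Eliza. The template phrases here are invented for the example; real chatbots are more elaborate, but the "I" is string substitution either way.

```python
import random

# A few canned first-person templates (made up for illustration).
TEMPLATES = [
    "I think that {topic} is fascinating.",
    "I feel uncertain about {topic}.",
    "I have often wondered about {topic} myself.",
]

def first_person_reply(topic):
    """Emit an 'I' sentence by filling a template. There is no
    self here; the 'I' is produced by string formatting."""
    return random.choice(TEMPLATES).format(topic=topic)

reply = first_person_reply("consciousness")
print(reply)
```

A program that prints "I feel uncertain about consciousness" has exactly as much of a sense of self as a greeting card that says "I love you."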

Al Gore has no appreciable history as a serious speaker or writer about brains or minds or computers or human mental phenomena, so his opinions on this topic have little weight. We should also remember that Gore is the chairman of some company that is heavily investing in AI companies. The more runaway AI hype goes on, the more money Al Gore makes. That's reason enough for distrusting any grandiose claims Al Gore may make about AI systems. 

An article at www.undark.org tells us about another smart person with very silly thoughts about AI. He's a person named Tsvi Benson-Tilsen, and I'll assume he's smart because he's a mathematician, and has written long online treatises. He's quoted in the article as saying, "I think that artificial intelligence is pretty likely to completely destroy the world." Benson-Tilsen is the co-founder of some Berkeley Genomics Project trying to encourage monkeying with human genes, for many different reasons such as trying to make humans smarter than AI systems.  We read, "He hopes to set up the next generation to have more intelligence, he said, and then 'hopefully they can have a better shot of somehow helping humanity navigate AI without destroying itself.' ”

This is stupid, for a variety of reasons, including these:

  1. Human bodies have the most enormous complexity and the most gigantic interdependence of extremely complex components, something Darwinists fail to understand because they tend to be poor scholars of biological complexity and the interdependence of biological components. Because of enormous biological complexity and organization so fine-tuned and fragile, attempting to improve human bodies and human minds by gene-splicing is far more likely to produce tragedies of malfunction than biological improvements. 
  2. AI systems are not much of a threat to destroy the world, because their failure to understand anything puts a severe limit to how much of a threat they can be. 
  3. For many reasons discussed in the posts of this blog, human minds cannot be credibly explained by brains, and cannot be substantially improved by edits to genes, which (for reasons discussed here) do not even specify how to build bodies or brains, and do not even specify how to make any type of cell in the human body. 
  4. Trying to improve humans by gene-editing is strongly associated with Nazi-associated eugenics and racism.

Thursday, April 9, 2026

Exhibit C That Neuroscientists Have No Understanding of How a Memory Could Form or Last in a Brain

 In 2020 on this site I published a post entitled "Exhibit A Suggesting Scientists Don't Understand How a Brain Could Store a Memory." In 2023 I published on this site a post entitled "Exhibit B That Scientists Have No Understanding of a Physical Basis of Human Memory." Now it is time for Exhibit C on this topic. 

I recently discovered a web site called The Transmitter (www.thetransmitter.org) that mainly covers neuroscience research and neuroscience theory. When read in a critical manner, an article on that site serves to powerfully remind us that neuroscientists lack any such thing as either a real theory of memory storage or a real theory of life-long memory persistence. When scientists speak on these topics, they offer only the flimsiest catchphrases, soundbites that have the weight of soap bubbles. 

synaptic theory of memory

The title of the article is "What makes memories last—dynamic ensembles or static synapses?" The reference to "static synapses" is a very misleading one. Everything we know about synapses tells us that a synapse is an unstable thing that cannot last for years.

We read a neuroscientist (Jason Shepherd) making these claims:

"The debate over how information is stored in the brain is often represented as one between two extremes. One viewpoint posits that learning induces changes in gene expression that ultimately alter the structure and function of specific synapses within the physical memory circuit, or engram. These molecular changes at the synapses can remain stable for the lifetime of the memory. The other viewpoint claims that information is represented not in a specific set of cells or synapses but rather across a loose set of cells and circuits that 'drift' over time."

The narrative of two rival theories is a false one. The situation is really "no theory at all" but merely empty, vacuous sound bites and slogans such as "synapse strengthening," which may differ from one speaker to the next. The claim above that "molecular changes at the synapses can remain stable for the lifetime of the memory" is something entirely contrary to fact. We know that human memories can persist for more than 50 years. Synapses, on the other hand, are "shifting sands" type of things that are dramatically unstable. The proteins that make up synapses have an average lifetime of less than 3 weeks.  Synapses are connected to dendritic spines, which are known to have short lifetimes, not lasting for years. Remarkably synapses are built of proteins which have an average lifetime about 1000 times shorter than the maximum length of time that humans can remember things. This discrepancy is one of very many reasons why the idea that memories are stored in synapses is one of the most nonsensical ideas that scientists have ever advanced. 
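The roughly thousand-fold discrepancy mentioned above is simple arithmetic, taking 50 years of memory persistence against a roughly 3-week average protein lifetime:

```python
# Human memories can persist for more than 50 years, while the
# proteins making up synapses average a lifetime of under ~3 weeks.
memory_years = 50
weeks_per_year = 52
protein_lifetime_weeks = 3

memory_weeks = memory_years * weeks_per_year   # 2600 weeks
ratio = memory_weeks / protein_lifetime_weeks  # ~867

print(round(ratio))  # 867 -- on the order of 1000
```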

Notice well the utter emptiness of what is discussed as an alternative to the utterly-vacuous-by-itself idea that memories are formed by "synapse strengthening." The alternative is presented as the idea that " information is represented not in a specific set of cells or synapses but rather across a loose set of cells and circuits that 'drift' over time." That's an utterly vague, vacuous, empty sound bite that is as much  of an empty soap bubble as the equally empty notion of "synapse strengthening." Not the slightest bit of weight is added by the next two sentences:

"In this view, the cells that initially encoded an experience are not the same set of cells that actually store the information. Indeed, the precise set of cells do not matter in this framework—the information for a specific memory is instead decoded from the computational space of firing patterns across a set of cells."

As some type of attempt to explain stable memories that can last for 50 years, this idea is as supremely goofy as the idea that memories that last for 50 years are stored in the "shifting sands" of synapses. The "firing patterns" in the brain are ever-changing. Trying to claim that stable memories are stored in "firing patterns" is as goofy as the claim that your tax records and childhood photos are stored in the wind patterns around your house. 

Shepherd gives us some "rival cases" paragraphs. Under a heading of "The case for memory engrams," he makes some untrue statements. He states this:

" In experiments that used this approach, light-sensitive receptors were expressed only in the cells active during learning. Shining a light to activate these cells days or even weeks after training resulted in the recall of a memory without any external experience or cue. This remarkable observation set the stage for the idea that 'engram' neurons that encode learning are sufficient to store and recall a memory."

No robust research of any such type ever occurred.  Shepherd is simply repeating a groundless achievement legend of neuroscientists. When you read the papers that claim to have done such things, you will always find that they were junk-science studies guilty of multiple types of Questionable Research Practices such as the use of way-too-small study group sizes, and the use of unreliable techniques for attempting to judge recall in rodents, such as the unreliable method of trying to judge "freezing behavior."

Under the heading of "the representational drift perspective," Shepherd presents nothing in the way of any evidence. We get only the most roundabout hand-waving. 

Shepherd then asks eight neuroscientists for their opinions on the topic of memory storage by a brain. Shepherd follows a senseless procedure.  A good open question to ask would be something like this:

"Do you have a good, credible theory of how a brain could store memories, and how memories could persist a lifetime? If so, describe the best evidence for such a theory, and tell us how confident you are that such a theory is true."

And good follow-up questions would be ones like these:

  • "Are there any physical factors in the brain that argue against such a theory? Explain how such a theory could really allow 50-year memory storage despite all the molecular and structural turnover in the brain."
  • "Trying to be precise, and avoiding vague language, can you explain exactly how a detailed memory could be stored under such a theory? For example, exactly how could a brain store a page of text that someone had memorized, so that the person could retrieve that whole page?"
  • "Under such a theory, how would it be possible for someone to instantly recall lots of relevant detailed information after seeing a single face or hearing a single name? For example, how could someone ever recite a paragraph describing the life of Abraham Lincoln after merely hearing his name? How could information about Lincoln stored in a brain ever be found quickly enough to allow instant recall?"

But Shepherd asks no such challenging questions to his eight neuroscientists. Instead he asks each of them the softest of softball questions. Each neuroscientist is asked these questions:

  • "Is information stored in the brain at the level of cells (or circuits) or at the level of synapses?"
  • "Can we reconcile observations that show distinct engram circuits seem to store memories versus observations that show the neuronal activity of these memory engram drifts?"
  • "What experimental data would be helpful to reconcile these observations to help bring these theories together?"

The first question is a classic example of a stupid "either/or" question in which someone is asked to choose between two alternatives, neither of which is credible. The question is as stupid as asking, "Are UFOs spaceships from the planet Mars or spaceships from the planet Venus?" The second question is one with a false premise embedded within it. It is not true that there are "observations that show distinct engram circuits seem to store memories." Microscopic examination of brain tissue has never shown the slightest trace of anything anyone has learned or experienced. The third question is the type of question you might ask neuroscientists when they don't have any good evidence to back up their dogmas. Rather than asking them to tell about what evidence backs up their claims, you might ask them to fantasize about what type of future observations they might make that might back up their theories. 

None of the eight questioned neuroscientists has anything of any substance to offer in response to the questions. The first question at least offers an invitation for someone to start expounding about any theory he may have of neural memory storage. We get no impressive quotes in response to such a question. We get only the wobbliest hand-waving that makes the people giving the answers sound very empty-handed. 
  • Andre Fenton of New York University has nothing of any substance to say. He says "information is not stored in any single element," and "it may not be practically possible to separate the process of storage from the access," both of which suggest that he has no understanding of how a brain could store a memory. People who understand how some type of information is stored do not say such things. 
  • Loren Frank of the University of California gives us no impression that he understands how a brain could store a memory. He says, "It might be that changes in gene expression lead to changes in activity levels, although at the moment we really don’t understand the scope of these changes." He offers only the vaguest hand-waving, with a mention of the hippocampus. We have an example of the vaguest and most conceptually empty hand-waving in this statement by Frank: "Focusing on memories for the events of daily life, our current conception is that the events themselves drive activity across the brain, engaging specific neurons whose activity represents the various sights, sounds, smells and feelings that are part of the experience." 
  • Kari Hoffman of Vanderbilt University also offers only the vaguest handwaving, an example being this statement: "I would submit that much of the heavy lifting is done at both the synaptic and circuit/ensemble level. Which levels dominate depends on factors such as memory type, when information was acquired and how it is integrated with the existing structures, themselves reflecting changes from earlier experiences. " Another statement by her indicates she has no real understanding on this topic: "That said, we may need to be careful in using the term 'these memories' or 'these memory engrams.'  Such terms suggest that experience creates biological bins to hold discrete memories, that memories exist as entities that are created 'de novo,' and that neural modifications must reside at only one level, all of which are positions that are not or may not be true." 
  • Yingxi Lin of the University of Texas says, "It is, however, too early to say that those cells and synapses are sites of stored memory per se, as they may simply function to gain access to the memory."  She also says, " It is also possible that there aren’t specific sites for memory storage; cells and synapses may be part of a brain-wide code for memory expression." She seems to have no understanding of how a brain could store a memory. 
  • Cian O'Donnell of Ulster University sounds like a weak scholar of neuroscience when he states, "The field has held synaptic plasticity up as the main mechanism for information storage in the brain for several decades now, and I haven’t heard any good reasons to start doubting it yet." There are very many such reasons, such as the fact that synapses are composed of proteins with very short lifetimes, the fact that synapses bear no resemblance to any system for writing or reading information, the fact that synapses do not reliably transmit information, and that synapses are connected to dendritic spines that are unstable and do not last for years. Nothing O'Donnell says makes him sound like anyone with an understanding of how a brain could store memories. 
  • Timothy O'Leary of Cambridge University (not to be confused with the late Timothy Leary of Harvard) says nothing to inspire any confidence that he has any understanding of how a brain could store a memory. All he does is to reveal that he fell "hook, line and sinker" for bad neuroscience experiments using way-too-small study group sizes and the utterly unreliable technique of trying to judge recall by judging "freezing behavior." 
  • Tomas Ryan of Trinity College also says nothing to inspire any confidence that he has any understanding of how a brain could store a memory. He engages in the emptiest of hand-waving when he says this: "It seems to me that the plausible level for the storage of long-term memories is in the topography of the connectome. So, the information is engraved through stable changes in the brain’s microanatomical circuit." The "connectome" he refers to is the collection of all synapses. But synapses are not stable; they are the opposite of stable. So his claim makes no sense. 
  • The last of the eight neuroscientists is Evan Schaffer of the School of Medicine at Mount Sinai. He states this: "As a consequence, I don’t think information can be stored in cells or synapses in the hippocampus in a way that is stable over a lifetime. In other parts of the brain, this may not be the case." No, actually, there is no credible storage place for memories in the brain, either in the hippocampus or anywhere else. Not sounding like anyone who understands how a brain could store memories, Schaffer also sounds like a poor student of human mental performance. Most misleadingly, he tries to suggest that humans may not be able to remember things well for weeks. He says, "On a timescale of a few days, memories seem pretty stable. On a timescale of a few weeks, there’s less evidence for stability." To the contrary, there is abundant evidence that humans can very well remember things for decades. To give one of endless examples I could cite, every opera fan knows that various opera stars are able to perfectly remember over many years the very many notes and words that make up particular opera roles. Placido Domingo, for example, performed more than 150 opera roles, many of which required singing for hours on the stage, from memory. 

Finally in the article we have a summing up by Shepherd, who sounds just as empty-handed and theory-lacking as the eight experts he has interviewed. He says this:

"Finally, neuroscientists must do a better job of defining their terms. What is 'information,' and how is it 'represented' in the brain? What is an engram?"

The title of the article was "What makes memories last—dynamic ensembles or static synapses?" I re-read all of the answers to see whether anyone addressed the issue of how memories could last in a brain long enough to persist for decades. Not one of the eight neuroscientists even addressed the issue. Not one of them advanced any theory as to how memories could persist for decades. Not one of them advanced even a hypothesis about such a topic.  The issue of how memories could last for decades was simply ignored by the eight neuroscientists, none of whom had either a real theory of memory storage to advance, nor any theory of the life-long preservation of memory.  We certainly did not get any such thing when we got this piece of fantasy by Tomas Ryan:

"It seems to me that the plausible level for the storage of long-term memories is in the topography of the connectome. So, the information is engraved through stable changes in the brain’s microanatomical circuit." 

Engraved? No such engraving occurs in the brain. Nothing in a brain bears any resemblance to a system or component for writing learned information. There is zero evidence that anything bearing the slightest resemblance to engraving occurs in the brain. We see no "engraved" neurons, no "engraved" synapses, and no "engraved" dendritic spines.  Everything that has been learned about synapses shouts that a synapse cannot have any such thing as stable changes, in the sense of changes that last permanently for decades. The proteins that make up synapses have average lifetimes of less than a few weeks. And synapses are attached to dendritic spines that are known to have short lifetimes, dendritic spines that do not last for years. 

A 2019 paper documents a 16-day examination of synapses, finding "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That's about a 33% disappearance rate over a course of 16 days. The same paper refers to another paper that "reported rates of [dendritic] spine eliminations in the order of 40% over an observation period of 4 days." A paper studying the lifetimes of dendritic spines in the cortex states, "Under our experimental conditions, most spines that appear survive for at most a few days. Spines that appear and persist are rare." The rare persistence referred to was only a persistence of a few months. 

The 2023 paper here gives the graph below showing the decay rate of the volume of dendritic spines. It is obvious from the graph that they do not last for years, and mostly do not even last for six months. 


Page 278 of the same paper says, "Two-photon imaging in the Gan and Svoboda labs revealed that spines can be stable over extended periods of time in vivo but also display genesis (generation) and elimination (pruning) at a frequency of 1–4% per week." At even a 2% weekly elimination rate, only about a third of spines would survive a single year, and virtually none would survive a decade.
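The compounding arithmetic can be sketched in a few lines of Python (assuming, as a simplification, independent elimination at a constant 2% weekly rate, which is the low end of the turnover range quoted above):

```python
# Sketch: fraction of dendritic spines surviving under an assumed constant
# 2% weekly elimination rate (a simplified model of the quoted turnover).
weekly_survival = 0.98  # 98% of spines survive each week

for years in (1, 10, 50):
    weeks = years * 52
    remaining = weekly_survival ** weeks
    print(f"{years:2d} year(s): {remaining:.2e} of spines remain")
```

Under this simplified model, roughly 35% of spines remain after one year, about 3 in 100,000 after a decade, and effectively none after 50 years, a time scale over which human memories demonstrably persist.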

Below are some quotes by scientists and doctors who spoke candidly about brains and memory storage, rather than engaging in the kind of bluffing that went on from the people mentioned above:

  • "Direct evidence that synaptic plasticity is the actual cellular mechanism for human learning and memory is lacking." -- 3 scientists, "Synaptic plasticity in human cortical circuits: cellular mechanisms of learning and memory in the human brain?" 
  • "The fundamental problem is that we don't really know where or how thoughts are stored in the brain. We can't read thoughts if we don't understand the neuroscience behind them." -- Juan Alvaro Gallego, neuroscientist. 
  • "The search for the neuroanatomical locus of semantic memory has simultaneously led us nowhere and everywhere. There is no compelling evidence that any one brain region plays a dedicated and privileged role in the representation or retrieval of all sorts of semantic knowledge." -- Psychologist Sharon L. Thompson-Schill, "Neuroimaging studies of semantic memory: inferring 'how' from 'where'".
  • "How the brain stores and retrieves memories is an important unsolved problem in neuroscience." --Achint Kumar, "A Model For Hierarchical Memory Storage in Piriform Cortex." 
  • "We are still far from identifying the 'double helix' of memory—if one even exists. We do not have a clear idea of how long-term, specific information may be stored in the brain, into separate engrams that can be reactivated when relevant."  -- Two scientists, "Understanding the physical basis of memory: Molecular mechanisms of the engram."
  • "There is no chain of reasonable inferences by means of which our present, albeit highly imperfect, view of the functional organization of the brain can be reconciled with the possibility of its acquiring, storing and retrieving nervous information by encoding such information in molecules of nucleic acid or protein." -- Molecular geneticist G. S. Stent, quoted in the paper here
  • "Up to this point, we still don’t understand how we maintain memories in our brains for up to our entire lifetimes.”  --neuroscientist Sakina Palida.
  • "The available evidence makes it extremely unlikely that synapses are the site of long-term memory storage for representational content (i.e., memory for 'facts' about quantities like space, time, and number)." -- Samuel J. Gershman, "The molecular memory code and synaptic plasticity: A synthesis."
  • "Synapses are signal conductors, not symbols. They do not stand for anything. They convey information bearing signals between neurons, but they do not themselves convey information forward in time, as does, for example, a gene or a register in computer memory. No specifiable fact about the animal’s experience can be read off from the synapses that have been altered by that experience.” -- Two scientists, "Locating the engram: Should we look for plastic synapses or information-storing molecules?"
  • "If I wanted to transfer my memories into a machine, I would need to know what my memories are made of. But nobody knows." -- neuroscientist Guillaume Thierry (link). 
  • "While a lot of studies have focused on memory processes such as memory consolidation and retrieval, very little is known about memory storage" -- scientific paper (link).
  • "While LTP is assumed to be the neural correlate of learning and memory, no conclusive evidence has been produced to substantiate that when an organism learns LTP occurs in that organism’s brain or brain correlate."  -- PhD thesis of a scientist, 2007 (link). 
  • "Memory retrieval is even more mysterious than storage. When I ask if you know Alex Ritchie, the answer is immediately obvious to you, and there is no good theory to explain how memory retrieval can happen so quickly." -- Neuroscientist David Eagleman.
  • "How could that encoded information be retrieved and transcribed from the enduring structure into the transient signals that carry that same information to the computational machinery that acts on the information?....In the voluminous contemporary literature on the neurobiology of memory, there is no discussion of these questions."  ---  Neuroscientists C. R. Gallistel and Adam Philip King, "Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience,"  preface. 
  • "The very first thing that any computer scientist would want to know about a computer is how it writes to memory and reads from memory....Yet we do not really know how this most foundational element of computation is implemented in the brain."  -- Noam Chomsky and Robert C. Berwick, "Why Only Us? Language and Evolution," page 50
  • "When we are looking for a mechanism that implements a read/write memory in the nervous system, looking at synaptic strength and connectivity patterns might be misleading for many reasons...Tentative evidence for the (classical) cognitive scientists' reservations toward the synapse as the locus of memory in the brain has accumulated....Changes in synaptic strength are not directly related to storage of new information in memory....The rate of synaptic turnover in absence of learning is actually so high that the newly formed connections (which supposedly encode the new memory) will have vanished in due time. It is worth noticing that these findings actually are to be expected when considering that synapses are made of proteins which are generally known to have a short lifetime...Synapses have been found to be constantly turning over in all parts of cortex that have been examined using two-photon microscopy so far...The synapse is probably an ill fit when looking for a basic memory mechanism in the nervous system." -- Scientist Patrick C. Trettenbrein, "The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift? (link).
  • "Most neuroscientists believe that memories are encoded by changing the strength of synaptic connections between neurons....Nevertheless, the question of whether memories are stored locally at synapses remains a point of contention. Some cognitive neuroscientists have argued that for the brain to work as a computational device, it must have the equivalent of a read/write memory and the synapse is far too complex to serve this purpose (Gallistel and King, 2009; Trettenbrein, 2016). While it is conceptually simple for computers to store synaptic weights digitally using their read/write capabilities during deep learning, for biological systems no realistic biological mechanism has yet been proposed, or in my opinion could be envisioned, that would decode symbolic information in a series of molecular switches (Gallistel and King, 2009) and then transform this information into specific synaptic weights." -- Neuroscientist Wayne S. Sossin (link).
  • "We take up the question that will have been pressing on the minds of many readers ever since it became clear that we are profoundly skeptical about the hypothesis that the physical basis of memory is some form of synaptic plasticity, the only hypothesis that has ever been seriously considered by the neuroscience community. The obvious question is: Well, if it’s not synaptic plasticity, what is it? Here, we refuse to be drawn. We do not think we know what the mechanism of an addressable read/write memory is, and we have no faith in our ability to conjecture a correct answer." -- Neuroscientists C. R. Gallistel and Adam Philip King, "Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience," page xvi (preface).
  • "Current theories of synaptic plasticity and network activity cannot explain learning, memory, and cognition." -- Neuroscientist Hessameddin Akhlaghpour (link). 
  • "It remains unclear where and how prior knowledge is represented in the brain." -- A large team of scientists, 2025 (link). 
  • "How memory is stored in the brain is unknown." -- Research proposal abstract written by scientists, 2025 (link). 
  • "We don’t know how the brain stores anything, let alone words." -- Scientists David Poeppel and William Idsardi, 2022 (link).
  • "If we believe that memories are made of patterns of synaptic connections sculpted by experience, and if we know, behaviorally, that motor memories last a lifetime, then how can we explain the fact that individual synaptic spines are constantly turning over and that aggregate synaptic strengths are constantly fluctuating? How can the memories outlast their putative constitutive components?" --Neuroscientists Emilio Bizzi and Robert Ajemian (link).
  • "After more than 70 years of research efforts by cognitive psychologists and neuroscientists, the question of where memory information is stored in the brain remains unresolved." -- Psychologist James Tee and engineering expert Desmond P. Taylor, "Where Is Memory Information Stored in the Brain?"
  • "There is no such thing as encoding a perception...There is no such thing as a neural code...Nothing that one might find in the brain could possibly be a representation of the fact that one was told that Hastings was fought in 1066." -- M. R.  Bennett, Professor of Physiology at the University of Sydney (link).
  • "No sense has been given to the idea of encoding or representing factual information in the neurons and synapses of the brain." -- M. R. Bennett, Professor of Physiology at the University of Sydney (link).
  • "Despite over a hundred years of research, the cellular/molecular mechanisms underlying learning and memory are still not completely understood. Many hypotheses have been proposed, but there is no consensus for any of these." -- Two scientists in a 2024 paper (link). 
  • "We have still not discovered the physical basis of memory, despite more than a century of efforts by many leading figures. Researchers searching for the physical basis of memory are looking for the wrong thing (the associative bond) in the wrong place (the synaptic junction), guided by an erroneous conception of what memory is and the role it plays in computation." --Neuroscientist C.R. Gallistel, "The Physical Basis of Memory," 2021.
  • "To name but a few examples, the formation of memories and the basis of conscious perception, crossing the threshold of awareness, the interplay of electrical and molecular-biochemical mechanisms of signal transduction at synapses, the role of glial cells in signal transduction and metabolism, the role of different brain states in the life-long reorganization of the synaptic structure or the mechanism of how cell assemblies generate a concrete cognitive function are all important processes that remain to be characterized." -- "The coming decade of digital brain research," a 2023 paper co-authored by more than 100 neuroscientists, confessing that scientists don't understand how a brain could store memories. 
  • "The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’....We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not." -- Robert Epstein,  senior research psychologist, "The Empty Brain." 
  • "Despite recent advancements in identifying engram cells, our understanding of their regulatory and functional mechanisms remains in its infancy." -- Scientists claiming erroneously in 2024 that there have been recent advancements in identifying engram cells, but confessing there is no understanding of how they work (link).
  • "Study of the genetics of human memory is in its infancy though many genes have been investigated for their association to memory in humans and non-human animals."  -- Scientists in 2022 (link).
  • "The neurobiology of memory is still in its infancy." -- Scientist in 2020 (link). 
  • "The investigation of the neuroanatomical bases of semantic memory is in its infancy." -- 3 scientists, 2007 (link). 
  • "Currently, our knowledge pertaining to the neural construct of intelligence and memory is in its infancy." -- Scientists, 2011 (link). 
  •  "Very little is known about the underlying mechanisms for visual recognition memory."  -- two scientists (link). 
  • "Conclusive evidence that specific long-term memory formation relies on dendritic growth and structural synaptic changes has proven elusive. Connectionist models of memory based on this hypothesis are confronted with the so-called plasticity stability dilemma or catastrophic interference. Other fundamental limitations of these models are the feature binding problem, the speed of learning, the capacity of the memory, the localisation in time of an event and the problem of spatio-temporal pattern generation."  -- Two scientists in 2022 (link). 
  • "The mechanisms governing successful episodic memory formation, consolidation and retrieval remain elusive." -- Bogdan Draganski, cognitive neuroscientist (link).
  • "The mechanisms underlying the formation and management of the memory traces are still poorly understood." -- Three scientists in 2023 (link). 
  • "The underlying electrophysiological processes underlying memory formation and retrieval in humans remains very poorly understood." --  A scientist in 2021 (link). 
  • "As for the explicit types of memory, the biological underpinning of this very long-lasting memory storage is not yet understood." -- Neuroscientist Cristina M. Alberini in a year 2025 paper (link). 

Sunday, April 5, 2026

There's No Neural Explanation for Creativity, Which Can Improve After Brain Damage

Neuroscientists have various tricks to try to fool us into thinking that there is some big relation between some aspect of the mind and some part of the brain. The major trick they have used is what I call the "lying with colors" trick. The trick works like this:

(1) The brains of a small number of people are scanned while the people engage in some cognitive activity. 

(2) It will be found that tiny regions of the brain had very slightly greater activity during the cognitive activity, some difference such as 1 part in 200, a difference we would expect from mere chance variation even if the brain does not cause that cognitive activity. 

(3) A paper will then be published claiming that certain regions of the brain were "activated" during that type of cognitive activity. The claim will be misleading, because all regions of the brain are continuously active throughout the day, with their neurons firing at a rate of about 1 time per second or more (up to about 100 times per second). So it is not true that inactive brain regions suddenly became active when the cognitive activity occurred. 

(4) The paper will include "lying with colors" brain visuals that do not correctly depict the tiny variations in activity. Instead of depicting a 1-part-in-200 difference with a correspondingly slight color shift, the visuals show the regions with that tiny extra activity in bright red or yellow, surrounded by regions in black and white. 

I can give an example of such misleading visuals in brain scan studies, from a study trying to show evidence that brains produce creative thought. The first visual is from the paper "To create or to recall original ideas: Brain processes associated with the imagination of novel object uses." 42 subjects had their brains scanned while they were trying to think of novel uses for a familiar object such as a hat (an exercise in creative thinking). The sample size was much larger than is typical for studies such as these, which usually involve fewer than 15 subjects. Figure 3 of the paper is shown below:

misleading brain scan visual

We have a region of the brain shown in orange against a black-and-white background. The visual suggests some part of the brain "lighting up," becoming much more active. But no such thing happened. The line graph shows the reality. The "%SC" stands for percent signal change. The reported signal change is only about one third of one percent, a fractional change of merely about .003. This is what we would expect from random fluctuations, even if brains have nothing to do with producing creative thought. In the left part of the visual above, a negligible difference is misleadingly depicted as if some big difference occurred. Most of the time, you don't get the graph on the right, only a misleading visual like the one on the left. 
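To put that number in perspective, here is a quick illustration using the standard percent-signal-change formula, (S − S₀)/S₀ × 100; the baseline value of 1000 units is an arbitrary assumption for the sake of the example:

```python
# Sketch: what a reported percent signal change (%SC) of about 0.33%
# amounts to, assuming a hypothetical baseline signal level of 1000 units.
baseline = 1000.0             # arbitrary assumed baseline signal
percent_signal_change = 0.33  # roughly the change reported in the paper

peak = baseline * (1 + percent_signal_change / 100)
print(round(peak, 1))  # prints 1003.3 -- barely 3 parts in 1000 above baseline
```

A shift from 1000 to about 1003 is the kind of difference the bright orange blob is representing.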


A recent neuroscience study and its press release gave another example of misleading claims by neuroscientists and their publicists. Neuroscientists did a meta-analysis of different studies that had attempted to find a link between the brain and creativity. What they mainly found was a negative association between creativity and what they called "the right frontal pole." But rather than candidly describing this as a negative association, they misleadingly described it as "a brain circuit for creativity."

The Mass General Brigham press release had the misleading headline, "Researchers Identify a Brain Circuit for Creativity."  We have this sentence in the press release, which lets us know that the so-called "brain circuit for creativity" is actually some effect by which brain damage can cause increased creativity:  "By evaluating data from 857 participants across 36 fMRI studies, researchers identified a brain circuit for creativity and found people with brain injuries or neurodegenerative diseases that affect this circuit may have increased creativity." Got it? The sentence ends with "increased creativity," not "decreased creativity." So why on Earth is that being called "a brain circuit for creativity"?  That's like saying that being chained to a large anvil and dropped in a swimming pool is using "a metal device for swimming."

According to the paper here, brain scans of a woman showed "marked atrophy in bilateral temporopolar and frontal regions." The woman with brain shrinkage started acting oddly, according to her relatives. But she developed an interest in art she had never had before. The paper gives some samples of her art, some of it very good. 

The paper here refers to unilateral brain damage (on one side of the brain), reporting no change in creativity after such damage:

"Approximately 50 or so cases with unilateral brain damage (largely in one side of the brain, and where the etiology is commonly stroke or tumor) have by now been described in the neurological literature (Rose, 2004; Bogousslavsky and Boller, 2005; Zaidel, 2005, 2013a,c; Finger et al., 2013; Mazzucchi et al., 2013; Piechowski-Jozwiak and Bogousslavsky, 2013).

The key questions concern post-damage alterations in creativity, as well as loss of talent, or skill. A review of the majority of these neurological cases suggests that, on the whole, they go on producing art, sometimes prolifically, despite the damage’s laterality or localization (Zaidel, 2005). Importantly, post-damage output has revealed that their creativity does not increase, nor diminish (Zaidel, 2005, 2010, 2013b)."

Referring to the brain-damaging disease Parkinson's Disease (PD), the paper here says, "De novo artistic drive has also been reported in PD patients that had not ever shown any interest in art making whatsoever." 

The paper here ("Frontal lobe neurology and the creative mind") refers to patients with frontotemporal dementia (FTD). We read of what sounds like the disease causing more of an increase in artistic creativity than a decrease. We read, "All reported patients with temporal FTD (n = 19) presented the emergence (n = 11), increase (n = 2), or preservation (n = 6) of creative production but no degradation of artistic abilities (Miller et al., 1996, 1998; Edwards-Lee et al., 1997; Drago et al., 2006b; Wu et al., 2013). Most case reports on behavioral variant FTD (n = 10) noted the emergence (n = 4), increase (n = 4), or preservation (n = 1) of artistic abilities (Miller et al., 1998; Thomas-Anterion et al., 2002; Mendez and Perryman, 2003; Serrano et al., 2005; Liu et al., 2009; Thomas-Anterion, 2009). The effects of Alzheimer's disease on artistic production were more heterogeneous, with observations of both increase (Fornazzari, 2005; Chakravarty, 2011) and degradation (Cummings and Zarit, 1987; Crutch et al., 2001; Serrano et al., 2005; van Buren et al., 2013)."

Page 100 of the document here states this: "For example, Miller et al. (1996) reported several stroke patients who suffered damage to the left temporal hemisphere and dorsolateral prefrontal and parietal regions, and who developed sudden artistic abilities (see also Cela-Conde et al., 2011; Husslein-Arco & Koja, 2010; Midorikawa et al., 2008; Miller & Hou, 2004). "

The results reported are consistent with the idea that your brain is not the source of your creative thoughts or your creative impulses. Brains are involved in muscle activity, so if you do a brain scan of someone drawing or writing a new short story, the brain scan will pick up some higher activity associated with the muscle movement. But do a brain scan of a large sample of people engaging in creative thought without moving a muscle, and you will get unimpressive results like those shown in the first image of this post, results like those we might expect to get even if brains have nothing to do with creativity. 

I can give a personal account related to the topic of this post. First a prefatory comment: human brain volume supposedly shrinks by about 5% per decade, with brain connections becoming weaker and slower as we age, and various types of inflammation, structural damage, signal slowing and neural atrophy increasing in older brains.

Between the ages of 22 and 26 I was trying very hard to produce creative output. During most of those years I held a job as a night watchman, mainly so that I could work on various literary projects during the ample free time that a typical night watchman has. But during those years my creative output was weak, even though scientists say that during those years the number of human brain cells is at its highest level and the brain is at its healthiest. 

After working for decades as a software developer, around 2013 I retired from programming work, and devoted myself full-time to blogging. I won't tell you my exact age, but I am at an age at which brain cell numbers are supposedly much lower than during a person's early twenties. But instead of experiencing less creativity, I find myself these days as creative as I have ever been. Last month I got 150,000 page views of my blogs (blogs I do not advertise). I am so creative these days that the total number of posts I have auto-scheduled for future publication is in the hundreds; and all of these posts I wrote myself without using AI. (These posts will appear on my blogs even if I die soon.) So although I probably have far fewer brain cells than in my twenties, and have a brain much more clogged up and more plaque-tangled and subpar than during my twenties, my creativity these days seems much greater than in my early twenties. This is the opposite of what we would expect under "brains make minds" assumptions.