Humans have astonishing capabilities for recognizing many different types of things: faces, individual words, quotations, places, musical compositions, and so forth. There is no credible neural explanation for how recognition occurs. There is no robust evidence for any neural correlate of recognition. Brains do not look or act any different when you are recognizing something. For example:
- The year 2000 study "Dissociating State and Item Components of Recognition Memory Using fMRI" found no difference in brain signals of more than 1 part in 100, with almost all of the charted differences being only about 1 part in 500.
- The study "Remembrance of Odors Past: Human Olfactory Cortex in Cross-Modal Recognition Memory" found no difference in brain signals of more than 1 part in 200.
- The study "Neural correlates of auditory recognition under full and divided attention in younger and older adults" found no difference in brain signals of more than 1 part in 500.
- The study "Neural Correlates of True Memory, False Memory, and Deception" asked people to make a judgment of whether they recognized words, some of which they had been asked to study. The study found no difference in brain signals of more than about 1 part in 300.
- The study "The Neural Correlates of Recollection: Hippocampal Activation Declines as Episodic Memory Fades" was one in which "participants performed a recognition task at both a short (10-min) and long (1-week) study-test delay." The study found no difference in brain signals of more than about 1 part in 300.
- The study "The neural correlates of everyday recognition memory" found no difference in brain signals of more than about 1 part in 500.
- The study "Neural correlates of audio‐visual object recognition: Effects of implicit spatial congruency" was one in which participants attempted a recognition task. The study found no difference in brain signals of more than about 1 part in 200.
Some have claimed that there is something in the brain called a "fusiform face area" that is more active when you are recognizing faces. Such a claim is not well-founded, for reasons I discuss in my post here.
But some claim there is some theoretical basis for a little understanding of how a brain could recognize something. For example, the recent paper "Computational models of learning and synaptic plasticity" by neuroscientist Danil Tyulmankov is one of numerous pieces claiming that computer science work provides models shedding light on how a brain might learn. Such claims are unfounded because of the vast physical differences between what is going on in brains and what goes on inside computers.
On page 7 of his paper Danil Tyulmankov gives us a typical example of someone trying to trick us into thinking that some computer software technique has some relevance to explaining how a brain could recognize something. Under a heading of "Memory paradigms" and subheadings of "Recall" and "Associative Memory" he states this:
"The colloquial use of 'memory' commonly refers to declarative memory (also called explicit memory) –
the storage of facts (semantic memory) or experiences (episodic memory) – which requires intentional
conscious recall. One of the most influential models of recall is the associative memory network
(Figure 1a), also known as the Hopfield network (Hopfield, 1982). The model’s objective is to store
a set of items ... such that when a perturbed version ... of one of the items
is presented, the network retrieves the stored item that is most similar to it. For example, given
a series of images, as well as a prompt where one of the images is partially obscured, the network
would be able to reconstruct the full image. More abstractly, given a series of lived experiences, this
may correspond to a verbal prompt to recall a piece of semantic or autobiographical information."
We have here the typical shenanigans of someone trying to conflate human memory and computer memory, a conflation made easy by the happenstance that the same word ("memory") is used for two completely different things. Tyulmankov has given us above a paragraph that starts with a sentence referring to human memory; he then refers to a purely computer software method with no relevance to human memory; and he ends the paragraph with another sentence referring only to human memory. It is rather like someone trying to give you the impression that the president of the USA is a dog, by writing a paragraph whose first sentence refers to dogs, whose second and third sentences refer to the president of the USA, and whose last sentence refers to dogs again.
Let me explain some reasons why Hopfield networks do nothing to explain how a human could remember or recognize anything. Hopfield networks are groups of nodes in which each node has a connection to each of the other nodes in the group. The diagram below illustrates a very simple Hopfield network. Each of the circles is called a node. A Hopfield network might have any number of nodes. In a Hopfield network, the different connections between the nodes can have different numerical values called "strengths" (or "weights").
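To make this concrete, below is a minimal sketch of a Hopfield network written in Java-style code. This is purely my own illustration (the class name HopfieldSketch and every detail of it are mine, not taken from Tyulmankov's paper or Hopfield's), and it is only a sketch of the standard technique, not a definitive implementation:

// A minimal Hopfield network sketch: each node holds a state of +1 or -1,
// and each pair of nodes shares a numerical connection strength ("weight").
public class HopfieldSketch {
    final int n;              // number of nodes
    final double[][] weight;  // weight[i][j] = connection strength between nodes i and j

    HopfieldSketch(int n) {
        this.n = n;
        this.weight = new double[n][n];
    }

    // Store a pattern of +1/-1 values by Hebbian learning: strengthen the
    // connection between two nodes sharing a state, weaken it when they differ.
    void store(int[] pattern) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j)
                    weight[i][j] += pattern[i] * pattern[j];
    }

    // Retrieve: start from a perturbed pattern and repeatedly set each node to
    // the sign of the summed, weighted input it receives from all other nodes.
    int[] retrieve(int[] start, int sweeps) {
        int[] state = start.clone();
        for (int s = 0; s < sweeps; s++)
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j = 0; j < n; j++)
                    sum += weight[i][j] * state[j];
                state[i] = (sum >= 0) ? 1 : -1;
            }
        return state;
    }
}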
Now, if you search on the Internet, you can find various examples of programming code like the sketch above that use Hopfield networks to store and retrieve information. Sometimes while giving such examples, it is claimed that the code has some relevance to explaining how a brain could remember something. We are sometimes told that Hopfield networks have some relevance to the brain because, just as individual neurons in the brain can each be connected to many other neurons through synaptic connections, each node in a Hopfield network is connected to every other node in the network. To play up the similarity, the nodes of a Hopfield network are sometimes called "neurons," even though such a term is profoundly misleading, for reasons I will explain below. There are, however, very strong reasons why Hopfield networks have no relevance to explaining how a brain could remember something. They are listed in my visual above. I will explain each.
Reason #1: Neurons do not have any capacity for storing some learned piece of information such as an image, a number or a word.
In a Hopfield network, particular nodes of the network (together with their connection strengths) may store some item of information. But a neuron does not have any capacity that we know of for storing some item of learned information. No one has ever found an item of learned information by examining a neuron. A great deal of tissue has been extracted from the brains of living people, and no one ever found in a neuron something like the letter "A" or the word "cat" or the number "1776." No one has ever found even a single number such as 0 or 1 stored within a neuron.
Neurons also have no ability to function as binary switches, like the light switches that control whether a light is on or off. A neuron fires at a varying rate, with much variation from one minute to the next. There is nothing in a neuron that flips between a permanent "off" state and a permanent "on" state. So even attempts to depict individual neurons as storing a value of 0 or 1 are invalid. Neurons are not like binary switches.
The page here provides code for a Hopfield network, using the term "neuron" to describe the nodes of the network. It states, "Each neuron in the network represents a binary unit that can have a state of either +1 (active) or −1 (inactive)." That does not correspond to the physical reality of neurons over any long time scale. Over the course of a few seconds, a neuron can switch between being active and inactive. But over a time span such as days, neurons do not switch between some active state and an inactive state. All neurons are electrically active over a time span of 24 hours. So it is not accurate to imagine some situation persisting over a long time in which one neuron corresponds to a 0, and another neuron corresponds to a 1. Neurons fire at rates between 1 time per second and 200 times per second, and such firing rates vary unpredictably.
So as simple a storage task as the storage of the word "dog" cannot occur through some method like that imagined above. The word "DOG" corresponds to the ASCII numbers 68, 79 and 71, which correspond to the binary sequence 01000100 01001111 01000111. But we can imagine no group of about 24 neurons storing that binary sequence over a long period such as months, because there can be no situation in which some neurons are inactive over a period of months (corresponding to 0) while other neurons are active over months (corresponding to 1). All neurons are continually active, and neurons do not have any switch-like feature that could enable binary information storage. Plus there's the fact that the brain has no such thing as an ASCII chart allowing a conversion between letters of the English alphabet and decimal numbers.
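For anyone who wants to check that conversion, here is a trivial snippet in the same Java style (again my own illustration), printing each letter's ASCII number and its 8-bit binary form:

// Print the ASCII code and 8-bit binary form of each letter of "DOG".
for (char c : "DOG".toCharArray()) {
    String bits = String.format("%8s", Integer.toBinaryString(c)).replace(' ', '0');
    System.out.println(c + " = " + (int) c + " = " + bits);  // e.g. D = 68 = 01000100
}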
Reason #2: Unlike Hopfield networks in computer software, the connections between neurons are noisy and unreliable
Programming code using Hopfield networks will typically rely on a simple retrieval procedure in which information is extracted across the network with 100% reliability. That does not correspond to the situation in the brain. Almost all connections in the brain require signals passing across chemical synapses. But chemical synapses do not transmit signals reliably. Scientific papers say that each time a signal reaches a chemical synapse, it is transmitted with a reliability of 50% or less. A paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."
What this means is that computer programs using a Hopfield network to retrieve information are not realistically simulating the brain. Were you to modify such programs to realistically simulate the unreliable synaptic transmission in the brain, such programs would no longer be able to achieve their functions of information retrieval or recognition.
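To see what I mean, consider how the retrieval step of the HopfieldSketch class above would have to be rewritten to model the unreliability the quoted papers describe. The method below is again my own hypothetical sketch, with each weighted input delivered only with some probability p (such as the 0.5 or 0.1 the papers report):

// Retrieval with unreliable "synapses": each weighted input from another node
// is delivered only with probability p, per the transmission failure rates
// reported in the papers quoted above.
int[] unreliableRetrieve(int[] start, int sweeps, double p) {
    java.util.Random rng = new java.util.Random();
    int[] state = start.clone();
    for (int s = 0; s < sweeps; s++)
        for (int i = 0; i < n; i++) {
            double sum = 0;
            for (int j = 0; j < n; j++)
                if (rng.nextDouble() < p)  // the signal gets across only p of the time
                    sum += weight[i][j] * state[j];
            state[i] = (sum >= 0) ? 1 : -1;
        }
    return state;
}

With p at such levels, each node's update is computed from a different random subset of its inputs on every pass, which is nothing like the deterministic retrieval that Hopfield-network code depends on.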
Reason #3: A group of neurons is a "fuzzy boundary" thing that does not make a closed network that can be traversed from beginning to end
To understand this reason, let us look at how neurons are arranged in the brain. A typical neuron has very many synapses that connect it to other neurons. It has been estimated that the brain has about 100 billion neurons, and about 100 trillion synapses. This means the average neuron has about 1000 synapses, each of which is a connection between that neuron and other neurons. All those synapses lock a neuron in place at a particular location, just as the roots of a tree in a dense forest lock that tree into a particular location in the forest.
The visual below (from the site here) shows some neurons in the brain. The colors are artificial, supplied to show individual neurons.
When I search for information on the average distance between neurons, compared to the average size of a neuron, I am told that the average size of the soma at the center of a neuron is about 10-25 micrometers (millionths of a meter), and that the average distance between neurons is also about 25 micrometers. So neurons are densely packed in the brain, rather like in the artistic depiction below.
Now, there is a great problem with any spherical volume of neurons such as the densely packed neurons shown above. The problem is that such a volume has no particular spot or neuron that is its beginning, and no particular spot or neuron that is its end. So the volume of neurons cannot be traversed from beginning to end. For any particular neuron connected to about 1000 other neurons, there is no such thing as a "next neuron" and no such thing as a "previous neuron."
But a traversal from a beginning to an end is a crucial part of all programming that utilizes Hopfield networks; in the sketch above, for example, the retrieval loop visits node 0, then node 1, and so on, until it reaches the last node. Traversal from a beginning to an end is crucial to the very idea of a Hopfield network. A Hopfield network does not correspond to a group of neurons, which has a fuzzy boundary and is not like a closed network with a beginning and an end.
Reason #4: Because of high levels of synaptic remodeling and the short lifetimes of synapse proteins (less than 4 weeks), the strengths of connections between neurons vary rapidly and randomly.
Hopfield networks include a "weight matrix" that is touted as something similar to the connections between neurons. But in such networks this "weight matrix" is a stable thing. That does not correspond to the connections between neurons, which are ever-varying in a random way.
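Below is a hypothetical sketch (again written as a method of my HopfieldSketch class above, and again my own illustration) of what a realistic weight matrix would have to include: a step that randomly erases some fraction of the connections every simulated week, the way the papers quoted below report synapses disappearing.

// Random synaptic turnover: every simulated "week," some fraction of the
// connections is simply erased, regardless of what was stored in them.
void applyWeeklyTurnover(double weeklyLossRate) {
    java.util.Random rng = new java.util.Random();
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (rng.nextDouble() < weeklyLossRate)
                weight[i][j] = 0;  // this connection is gone
}

No Hopfield-network program that is supposed to keep information for months includes a step like this; a weight matrix subject to such erasure would steadily lose whatever patterns were stored in it.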
Below is a quote from a scientific paper:
"A quantitative value has been attached to the synaptic turnover rate by Stettler et al (2006), who examined the appearance and disappearance of axonal boutons in the intact visual cortex in monkeys.. and found the turnover rate to be 7% per week which would give the average synapse a lifetime of a little over 3 months."
You can read Stettler's paper here. A 2019 paper documents a 16-day examination of synapses, finding "the dataset contained n = 320 stable synapses, n = 163 eliminated synapses and n = 134 formed synapses." That's about a 33% disappearance rate over the course of 16 days, suggesting an average synapse lifetime of less than three months.
You can google for “synaptic turnover rate” for more information. Synapses typically protrude out of bump-like structures on dendrites called dendritic spines. But those spines are short-lived: dendritic spines last no more than about a month in the hippocampus, and less than two years in the cortex. For example:
- This study found that dendritic spines in the hippocampus last for only about 30 days.
- This study found that dendritic spines in the hippocampus have a turnover of about 40% every 4 days.
- This 2002 study found that a subgroup of dendritic spines in the cortex of mouse brains (the more long-lasting subgroup) have a half-life of only 120 days.
- A paper on dendritic spines in the neocortex says, "Spines that appear and persist are rare."
- While a 2009 paper tried to insinuate a link between dendritic spines and memory, its data showed how unstable dendritic spines are. Speaking of dendritic spines in the cortex, the paper found that "most daily formed spines have an average lifetime of ~1.5 days and a small fraction have an average lifetime of ~1–2 months," and told us that the fraction of dendritic spines lasting for more than a year was less than 1 percent.
- A 2018 paper has a graph showing a 5-day "survival fraction" of only about 30% for dendritic spines in the cortex.
- A 2014 paper found that only 3% of new spines in the cortex persist for more than 22 days.
- Speaking of dendritic spines, a 2007 paper says, "Most spines that appear in adult animals are transient, and the addition of stable spines and synapses is rare."
- A 2016 paper found a dendritic spine turnover rate in the neocortex of 4% every 2 days.
- A 2018 paper found only about 30% of new and existing dendritic spines in the cortex remaining after 16 days (Figure 4 in the paper).
Furthermore, it is known that the proteins existing between the two knobs of the synapse (the very proteins involved in synapse strengthening) are very short-lived, having average lifetimes of no more than a few days. A graduate student studying memory states it like this:
"It’s long been thought that memories are maintained by the strengthening of synapses, but we know that the proteins involved in that strengthening are very unstable. They turn over on the scale of hours to, at most, a few days."
A scientific paper states the same thing:
"Experience-dependent behavioral memories can last a lifetime, whereas even a long-lived protein or mRNA molecule has a half-life of around 24 hrs. Thus, the constituent molecules that subserve the maintenance of a memory will have completely turned over, i.e. have been broken down and resynthesized, over the course of about 1 week."
The paper cited above also states this (page 6):
"The mutually opposing effects of LTP and LTD further add to the eventual disappearance of the memory maintained in the form of synaptic strengths. Successive events of LTP and LTD, occurring in diverse and unrelated contexts, counteract and overwrite each other and will, as time goes by, tend to obliterate old patterns of synaptic weights, covering them with layers of new ones. Once again, we are led to the conclusion that the pattern of synaptic strengths cannot be relied upon to preserve, for instance, childhood memories."
A paper on the lifetime of synapse proteins is the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that an earlier 2010 study found that the average half-life of brain proteins was about 9 days, and that a 2013 study found that the average half-life of brain proteins was about 5 days. The paper then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main categories of brain proteins (those of the nucleus, the mitochondrion, and so forth) have half-lives of 15 days or less. The 2018 study here precisely measured the lifetimes of more than 3000 brain proteins from all over the brain, and found not a single one with a lifetime of more than 75 days (Figure 2 shows the average protein lifetime was only 11 days).
The paper here states, "Experiments indicate in absence of activity average life times ranging from minutes for immature synapses to two months for mature ones with large weights."
When you think about synapses, visualize the edge of a seashore. Just as writing in the sand is a completely unstable way to store information, synapses are a completely unstable place to hold long-term information. The proteins that make up the synapses turn over very rapidly (lasting no longer than a few weeks), and the entire synapse is replaced every few months. Yet humans can reliably remember things they learned or experienced 50 or 60 years ago; and humans can recognize songs, faces, names and quotes that they have not been exposed to in 50 years. For example, lying in bed the other day, there strangely popped into my mind the name "Tobie Tyler." I recognized the name as that of a circus movie involving a boy, one I had not seen or heard mentioned in well over half a century. A Google search confirmed this (I saw the movie around 1960).
Reason #5: There is no ability in the brain to read the strength of the synaptic connections between neurons.
In a Hopfield network as implemented in computer software code, there is an ability to read the strength of all of the connections between nodes. But the brain has no corresponding ability. The brain has nothing like a synapse strength reader.
Computer programmers take for granted certain conveniences. Every programmer knows that if he has an array named DS, he can run a loop like the code below to sum up the numbers stored in each part of that data structure:
int nTotal = 0;
for (int i = 0; i < DS.length; i++) {
    nTotal = nTotal + DS[i];  // add the value at position i to the running total
}
But while this type of thing is a basic convenience available in the world of programming, it does not correspond to anything possible in the brain. Physically, brains have no way to run loops performing some mathematical or summation operation on each neuron or synapse in a set of neurons or synapses. A brain cannot sum up the strengths of a set of synapses, nor can a brain even read the exact strength of some particular synapse. Similarly, your muscular system has muscles of various strengths; but there is in your body no such thing as a muscle strength reader; and there never occurs in your body anything like a loop that sums up all the strengths of the muscles in some part of your body.
In short, while having a small amount of superficial resemblance to the arrangement of neurons and synapses, Hopfield networks and the programming code that uses them do not realistically simulate the realities of neurons and synapses. The ability of Hopfield networks to do certain tasks does nothing to show that the brain is capable of doing such tasks.
Below is a revealing confession by a neuroscientist named Slotine: "While neuroscience initially inspired key ideas in AI, the last 50 years of neuroscience research have had little influence on the field, and many modern AI algorithms have drifted away from neural analogies."