Thursday, April 5, 2018

Fancy New Technology Fails to Prove Memory Dogmas

If tortured sufficiently, data will confess to almost anything.
Fred Menger

Some will claim that fancy new technology such as optogenetics and alleged "mind-reading machines" helps prove conventional dogmas about the brain, such as the dogma that your brain stores your memories, and the dogma that your brain generates your thoughts. I will explain in this post why such claims are unfounded.

Memory Experiments with Optogenetics

Contrary to claims sometimes made, scientists have no solid evidence for any such thing as a memory trace (also called an engram): a physical change in a particular part of the brain, or a particular group of cells, that corresponds to the storage of a specific memory. How memory works is a great mystery. Some scientists have claimed to have learned or discovered something about memory traces or engrams, but such claims are not well founded.

Some have claimed that optogenetics experiments prove something about memory. There is a general reason why such experiments prove nothing about human memory: all of the widely reported optogenetics memory experiments so far have been done only with animals. It is entirely possible that there is a fundamental difference between animal memory and human memory.

In 2013 we had an example of a very dubious scientific paper claiming to have found something relevant to this issue. Two scientists (Xu Liu and Steve Ramirez) claimed to have created a false memory in a mouse. Their paper was entitled “Inception of a false memory by optogenetic manipulation of a hippocampal memory engram.” The claim was picked up by countless mainstream news sources, which failed to apply any critical scrutiny to it.

The experiment was done using some mice that were genetically engineered to be light sensitive.  An optical device was connected to part of their brains.  Mice were put into a box and given an electric shock. This, the scientists claimed, created a memory in part of the brains of the mice. 

Then later, when the mice were in a different box, light was transmitted over the optical device, into the brains of the mice. This, the scientists claimed, activated the memory that had been stored in the brains of the mice – supposedly because the mice “froze.” I may note that the use of the terms “froze” and “freezing” in the study is loaded terminology, a non-objective way of assuming what you are trying to prove. The correct objective way to describe a mouse that is not moving is to say the mouse was temporarily not moving. “Froze” is a loaded term specifically designed to get someone to think that a mouse stopped moving because of fear, but you can't tell how a mouse is feeling or what it is recalling merely from the fact that it stops moving.

We are told that this was an “implanted memory,” because the original memory was created not in the second box but in the first box. This term is inaccurate. If an experiment like this were ever done in a convincing manner, the most it would demonstrate is an electronic activation of a memory. It is also inaccurate to describe the result as an inception or implant of a false memory (as these scientists did in their scientific paper). If I have a memory of being punched on Fifth Avenue, and then I recall or relive that memory while on Lexington Avenue, that is not a false memory. It is a true memory of something that happened in a different place.

A more serious objection to the research is that it did not provide convincing evidence of a memory activation, or of any type of unusual memory effect at all, in the mice being studied. There are three reasons why I make this claim.

The first reason is that the number of mice tested was very small. When you read the paper, you will find the number of mice used was only about 6. That's way too small a sample size to be drawing any reliable conclusions. With a sample size that small, the results could easily have been due to coincidence. An experimenter wishing to show some particular effect could just keep trying until some round of experiments showed the desired effect. That would be hard to do with a large sample size, but easy to do with a very small sample size such as only 6 or 8 mice. See this post for a discussion of five other optogenetic memory experiments that used similarly small sample sizes, fewer than the 15 animals per study group recommended for reliable experimental results.
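
To illustrate how easily a small-sample result can arise by coincidence, below is a simple simulation I sketched. The numbers in it (the freezing-rate distribution, the five allowed reruns) are merely illustrative assumptions of mine, not figures from the Liu and Ramirez paper. The point is only to show how often two groups of 6 mice drawn from the very same distribution will yield a "statistically significant" difference if the experimenter can keep trying.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group = 6        # mice per group, matching the paper's tiny groups
n_reruns = 5           # assumed number of times the experiment can be repeated
n_simulations = 10000  # simulated "labs"

false_alarms = 0
for _ in range(n_simulations):
    for _ in range(n_reruns):
        # Both groups are drawn from the SAME distribution: there is no real effect.
        control = rng.normal(loc=0.30, scale=0.15, size=n_per_group)
        treated = rng.normal(loc=0.30, scale=0.15, size=n_per_group)
        if ttest_ind(treated, control).pvalue < 0.05:
            false_alarms += 1
            break  # the experimenter stops here and reports this run

print("Chance of at least one 'significant' run:", false_alarms / n_simulations)
# With 5 tries this comes out near 1 - 0.95**5, roughly 0.23 -- about a 1 in 4
# chance of a publishable-looking "effect" that is pure coincidence. Repeating an
# experiment this many times is cheap when each run needs only 6 or 8 mice.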

The second reason is that the conclusion about whether a memory was being recalled was presumably based on an observer judging whether a mouse froze, or stopped moving. The authors did not explain how it was determined that particular mice had “frozen,” and we can only assume that such a determination was reached from a subjective human judgment. Given the start-stop, helter-skelter way in which mice move, any judgment about whether a mouse froze is going to be a subjective judgment. So there is too much of a possibility of observational bias here, in which an observer subjectively reports the effect he is hoping to find. Similarly, you might subjectively report that your goldfish in a goldfish bowl tends to move towards you when you are looking into the bowl, but that would probably tell us more about your desire to see something than about the goldfish.

The third reason is that the freezing effect could have been produced not by a recall of memories, but by the very fact that energy was being transmitted into the brain of the mice. Imagine you are running along, and suddenly a scientist switches on some weird device that causes energy to pour into your brain. This all by itself might cause you to stop, even if it didn't cause you to recall some memory that caused you to stop. What could have been going on in the mice was just a kind of pausing effect caused by a novel stimulus, rather than a recalled fear effect. A science paper notes that it is possible to induce freezing in rodents by stimulating a wide variety of regions: "It is possible to induce freezing by activating a variety of brain areas and projections, including the hippocampus (Liu et al., 2012), lateral, basal and central amygdala (Ciocchi et al., 2010; Johansen et al., 2010; Gore et al., 2015a), periaqueductal gray (Tovote et al., 2016), motor and primary sensory cortices (Kass et al., 2013), prefrontal projections (Rajasethupathy et al., 2015) and retrosplenial cortex (Cowansage et al., 2014)."

We have no idea what was going on in the minds of these mice. It is not sound to assume that a mouse is “frozen in fear” merely because it stops moving, or to assume that the mouse is remembering something when it stops moving. We have no way of knowing what mice are remembering at any particular moment. We can also ask: why didn't the scientists try to confirm their claim of a memory stored in some particular spot of the brain by removing that spot? The technique would be simple: train a mouse to fear some particular stimulus, then surgically remove the little part of the brain where you think the memory is stored, and see whether the mouse still fears the stimulus.

A more recent paper by Ramirez and Liu was published in Nature, and was entitled “Activating positive memory engrams suppresses depression-like behaviour.” But the paper shows the same types of methodological problems as their earlier paper. Figure 2 of the paper indicates that one experiment used only 6 mice, with 6 mice in the control group, and elsewhere the paper states that a control group had only 3 mice. These sizes are way below the 15 animals per study group (control and non-control) recommended for a reliable experimental result. The authors claim to have counted differences in the degree to which mice “struggled” when presented with a maze – again something involving a subjective interpretation in which a researcher might tend to see whatever he wants to see. The authors' interpretation of what is going on is speculative. The authors do not present any solid evidence that they actually activated a memory by optogenetic stimulation.

But to its credit, Nature did publish an article entitled “Brain-manipulation studies may produce spurious links to behaviour,” pointing out that shooting light into one part of a brain (the technique used by Ramirez and Liu) may cause other parts of the brain to fire off, resulting in unpredictable effects. “Manipulating brain circuits with light and drugs can cause ripple effects that could muddy experimental results,” the article cautions. That's another reason for doubting these mouse memory studies based on optogenetic brain stimulation, since it undermines the whole simplistic idea of “stimulate just this area and activate just this memory.”

The 2019 study here by Ramirez and others is the latest unconvincing attempt to use optogenetics to show some evidence of memories being stored in a brain.  There are two big reasons why the study shows nothing of the sort:
(1) The study uses a technique in which animals are trained to fear some stimulus, and are then subjected to a brain "cell reactivation" that can be roughly described as a brain zapping.  The animals supposedly froze more often when this brain zapping was happening, and the study interpreted this behavior as evidence of an artificially produced recall of a fear memory. But such a technique does nothing to show that a memory is being recalled, because it is well known that there are many parts of a mouse brain that will cause freezing behavior when artificially stimulated.  The freezing behavior is probably a result of the strange stimulus, and not actual evidence of memory recall.  If you were walking along, you would also freeze if someone turned on some brain-zapping chip implanted in your brain. 
(2) The study uses sample sizes so small that there is a very high chance of a false alarm.  The number of animals per study group was only 10 to 12. But 15 animals per study group is the minimum needed for a modestly convincing result, and a neuroscientist has stated that to get a decent statistical power of .5, animal studies should be using at least 31 animals per study group. 

The second problem is one that is epidemic in modern neuroscience.  Neuroscientists are well aware that the sample sizes typically used in neuroscience studies (the number of animals per study group) are so low that there must be a very high chance of false alarms in very many or most of their experimental studies; but they continue year after year producing such unreliable studies, and foisting them on the public as evidence of things that neuroscientists want to believe in.  There is a "publication quota" expectation that provides a strong incentive for such professional malpractice. 
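
To put some rough numbers on this, here is a standard power calculation I sketched, using a two-sample t-test and an assumed "medium" standardized effect size of 0.5 (Cohen's d). The effect size is my assumption for illustration, not a figure from any of the studies discussed here. Under that assumption, the commonly used group sizes of 6 to 12 animals detect a real effect only roughly 10 to 20 percent of the time, and even about 31 animals per group only reaches a coin-flip power of around 0.5.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # assumed medium effect (Cohen's d); the true effect size is unknown
alpha = 0.05        # conventional significance threshold

for n in (6, 10, 12, 15, 31):
    power = analysis.power(effect_size=effect_size, nobs1=n, alpha=alpha, ratio=1.0)
    print(n, "animals per group -> power =", round(power, 2))

# Animals per group needed just to reach 50% power under these assumptions:
n_needed = analysis.solve_power(effect_size=effect_size, power=0.5, alpha=alpha, ratio=1.0)
print("Needed per group for 0.5 power:", round(n_needed, 1))  # comes out near 31 or 32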

The “mouse memory implant” research described above is inconsistent with a body of memory research produced over a much longer period of time: the memory research of Karl Spencer Lashley. Over many years, Lashley did extensive research in which he tested how memory and learning are affected when you take out various parts of an animal's brain. In one extensive set of experiments, Lashley trained rats to run a maze. The rats then had parts of their brains removed. Lashley found the rats were able to run the maze just as well regardless of which part of the brain was removed. This research strongly indicates that particular memories are not localized in one particular part of the brain, and it directly contradicts the “mouse memory implant” work that tried to suggest that a memory was stored in one particular part of the brain.

Lashley tested using three types of mazes of varying difficulty. Astonishingly, Lashley found that you could remove half of a rat's brain, and it had very little effect on the rat's ability to remember either of the two simpler types of mazes.

Here are some startling results listed by Lashley (and discussed here):
  1. Rats, trained to have a differential reaction to light, showed no reduction in accuracy of performance when the entire motor cortex of the brain, along with the frontal poles of the brain, was removed.
  2. Monkeys were trained to open various latch boxes. The entire motor areas of the monkeys' brains were removed. After 8 to 12 weeks of paralysis, during which they had no access to the latch boxes, the monkeys were then able to open the boxes “promptly” and “without random exploratory movements.”
  3. Rats were trained to solve mazes, and the rats then had incisions made separating different parts of their brains. This produced no effect in memory retention.
  4. Monkeys were trained to unlatch latch boxes. After having their prefrontal cortex removed, there was “perfect retention of the manipulative habits.”
  5. A number of experiments with rats have shown that habits of visual discrimination survive the destruction of any part of the cerebral cortex except the primary visual projection area.

After discussing these and many other experiments he did for many years, Lashley said this about the idea of an engram or memory trace: “It is not possible to demonstrate the isolated localization of a memory trace anywhere within the nervous system.”

Lashley's research is completely inconsistent with the research claims of Ramirez and Liu. Lashley's research provides compelling evidence that particular memories are not stored in particular parts of a brain. Conducted over more than 30 years with a huge number of animals, Lashley's research was many times more extensive than the scanty 6-mouse research of Ramirez and Liu that got so much press coverage. Given a conflict between the two lines of research, we should believe Lashley's research, which is so much more voluminous. Contrary to the claims of some optogenetic researchers using dubious methodology, there is no compelling evidence that particular memories are stored in particular parts of the brain, and no convincing evidence that specific memories can be recreated by stimulating particular parts of the brain. There is no good evidence for any such thing as a memory engram, a particular set of cells that stores a particular memory. Lashley's many years of research strongly indicate that such ideas are not valid, as does the research of John Lorber (who, as described here, documented many cases of humans who functioned very well despite having most of their brains destroyed through disease).

In 2014 our credulous and exaggeration-prone news media reported that researchers Wiltgen and Tanaka had erased specific memories in a mouse. But the reports were based on a research paper that justified no such conclusion. Figure 2 and Figure 3 of the paper show that the experimenters used only 6 mice for two of the experiments. That's way too small a sample size to produce reliable evidence of an effect. The standard is that you are supposed to use at least 15 animals in each study group to get reliable evidence of an effect. So the paper gave no clear evidence of having erased any memory in a mouse. The paper had some of the same methodological problems discussed above, such as relying on judgments of a mouse's "freezing rate," something that is very hard to objectively quantify.

Neuroscientist Mark Humphries has written a relevant article called "Some limits on interpreting causality in neuroscience experiments." Using the term "supernatural region" to mean an artificially created brain state not corresponding to a natural brain state of an organism, he states the following:


In optogenetics experiments, we turn on a bunch of neurons at the same time, and often hold them on for seconds at a time. Or we turn off a bunch of neurons at the same time, and hold them off for seconds at a time. This is very, very far from a natural region for any bunch of neurons we could name...So we have a fundamental limit to testing causality in the brain: we always push our neurons into the supernatural region, so we can never be sure that what we observe as a behavioural consequence is naturally causal.

There is no reliable basis for concluding that a memory was evoked because a mouse froze when its brain was optogenetically zapped to reach such a "supernatural" state. 

The Myth of the Mind-Reading Machine

In the British tabloid the Sun there's a prime example of bunk and bogus reporting of a scientific study. The headline says, “Mind-reading machine can now translate your thoughts to text immediately by interpreting brain activity.” The text of the article is carefully worded to make you think that such a mind-reading machine was invented.

We are told this machine was “detailed in the Journal of Neural Engineering” and that the study leader was David Moses. There's no link to the study, but when I searched for such a study, I found it. The paper in the Journal of Neural Engineering co-authored by David Moses is entitled, “Real-time classification of auditory sentences using evoked cortical activity in humans.”

The abstract describes the study as follows:

Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data.

This isn't mind-reading at all. It's auditory perception classification, and only to a very limited extent. Two people listened to the same ten sentences being spoken over and over again, while implanted electrodes monitored electrical signals from their brains. Some software used these readings to guess which of those ten sentences was being spoken to the people. This type of guessing is not any type of thought reading. Something that you are hearing is not something that you are thinking. When you hear something, that's a perception, not a thought.
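
To see how modest this kind of result is, consider the toy sketch below, which I wrote purely for illustration; it has nothing to do with the actual rtNSR software and uses only simulated signals. When the answer is guaranteed to be one of ten known, repeatedly presented sentences, even a crude nearest-template classifier on noisy data scores very high accuracy. High accuracy on a closed set of prompted stimuli is a far cry from reading open-ended thoughts.

import numpy as np

rng = np.random.default_rng(1)
n_sentences, n_features = 10, 50

# Each of the ten known sentences gets a fixed "template" response. This stands in
# for training data recorded while a subject listened to repetitions of the sentences.
templates = rng.normal(size=(n_sentences, n_features))

def classify(trial):
    # Pick whichever of the ten known templates the noisy trial is closest to.
    distances = np.linalg.norm(templates - trial, axis=1)
    return int(np.argmin(distances))

# Simulate 200 test trials: a known sentence's template plus noise.
correct = 0
for _ in range(200):
    true_label = int(rng.integers(n_sentences))
    trial = templates[true_label] + rng.normal(scale=0.8, size=n_features)
    correct += classify(trial) == true_label

print("Accuracy over a fixed set of 10 sentences:", correct / 200)
# High accuracy here involves no reading of anyone's thoughts -- only matching a
# signal against a small, fixed menu of known possibilities.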

The Sun news story has a phony-baloney infographic telling us that in this experiment “thoughts appear on screen as words,” but no such thing actually occurred. The story has been repeated, with similar inaccuracies, by other sources such as the Daily Mail.

What we have here is a stunt of no obvious usefulness. It's hard to foresee how anything useful might come out of being able to predict which sentence a person is listening to by reading traces of auditory perception in his brain. The study raises ethical concerns. Did Moses and his team implant electrodes in two people's brains (presumably something that might risk brain damage) to achieve this unimportant result? 

A similar bunk story appeared in 2016, with the headline “Scientists Have Invented a Mind-Reading Machine That Visualizes Your Thoughts.” The actual research was based on analyzing brain activity during visual perception, and did not involve any actual reading of thoughts (although it may have exploited perceptual after-images, as discussed below).

A 60 Minutes segment on an alleged "mind-reading machine" is preserved in a youtube.com video entitled “Mind Reading Machine on CBS Reads Your Thoughts.” But it isn't a reading of thoughts. The video shows people hooked up to a brain scanner. The people are shown viewing pictures of one of ten different objects. The machine then predicts from patterns in their visual cortex which of the ten things they were looking at. This is perception prediction, not a reading of thoughts.

In some of these experiments, the experimenters may be exploiting a kind of brief after-effect in which traces of something you just saw linger for a few seconds in the visual cortex. Imagine if I hook someone up to a brain scanner, show him a picture of a wrench for five seconds, and then ask him to close his eyes for five seconds and think of what he just saw. It is entirely possible that the parts of the visual cortex activated by such a perception will show traces that linger for a few seconds. We seem to see this in after-image optical illusions. An example of one is below. If you look at this image for 30 seconds, and then close your eyes, you should be able to still see one of the bars for a few seconds, as a kind of ghostly bar in your mind's eye. You can find many other examples of after-image optical illusions by doing a Google image search for "afterimage." 


We can imagine how an experimenter might exploit such an effect. He might hook a subject up to a brain scanner, and ask the subject to stare at an image for 30 seconds, and then close his eyes and think about the image just seen. The brain scanner might then scan the person's brain for a few seconds, and be able to predict which of 7 images the person saw, based on what was seen in the brain when the subject's eyes were closed. The experimenter might then encourage people to think such a thing was mind-reading. But this is not thought-reading. The scanner is just picking up a perceptual after-image. This is probably what is going on in the 60 Minutes video.

If an experiment like the phony description in the Sun story had actually occurred, it would be a monumental breakthrough of the greatest interest to every philosopher of mind. It would tell us that thoughts are actually generated by brains, an idea which has never been proven. There are good reasons for doubting such an idea. Among these is the fact that we have no understanding at all of how a brain could generate a thought or an abstract idea. As discussed here, attempts to search for a neuroscience explanation of how a brain could generate a thought result in a spectrum of incoherence that doesn't add up to anything. Thoughts are mental things, so how could they possibly be generated by merely physical things like brains? That would be rather like blood pouring out of a stone. 

Other very dubious stories in the press include one claiming that memory can be enhanced by electrical stimulation. One headline says, “Electric pulses to the brain can improve memory as much as 15 per cent, finds study.” Such a result is unimpressive. An experimenter could show a 15 percent increase in memory retention when people held a rabbit's foot in their hands. The experimenter need merely try 20 or 30 tests, and then submit for publication whichever one produced the best performance, taking advantage of random variations.
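
Here is a quick simulation I sketched of that "try many tests and keep the best one" strategy. The numbers (20 subjects per test, 25 tests, an average recall of 10 items) are merely illustrative assumptions of mine; the "stimulation" in this simulation does literally nothing. Even so, the best-looking of the 25 tests routinely shows an apparent improvement in the neighborhood of 15 percent.

import numpy as np

rng = np.random.default_rng(2)
n_subjects = 20     # assumed subjects per test
n_tests = 25        # number of test variations the experimenter gets to try
baseline = 10.0     # assumed average number of items recalled; no real effect exists

best_improvement = 0.0
for _ in range(n_tests):
    # Both conditions are drawn from the same distribution: the "stimulation" does nothing.
    control_mean = rng.normal(loc=baseline, scale=2.5, size=n_subjects).mean()
    stimulated_mean = rng.normal(loc=baseline, scale=2.5, size=n_subjects).mean()
    improvement = (stimulated_mean - control_mean) / control_mean
    best_improvement = max(best_improvement, improvement)

print("Best 'improvement' across", n_tests, "do-nothing tests:", round(best_improvement, 2))
# Rerunning this with different random seeds typically yields a best value of
# roughly 10 to 20 percent -- an "effect" produced entirely by random variation.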

There is no such thing as a pure memory test, since every memory test is also a test of perception and concentration. It is easy to imagine how some meaningless brain stimulus might cause someone to do a little better on a memory test. Suppose you do an experiment in which you first have a subject try to memorize things under normal conditions, and then have the subject try to memorize things while some fancy brain gizmo is attached to his head. Let's imagine the brain gizmo doesn't actually do anything except give the wearer a little buzzing sensation. It's entirely possible that this will produce a 10 percent or 15 percent performance improvement that is purely the result of a kind of power of suggestion and expectation. The subject may have the feeling that when he's wearing the brain gizmo, this is when he is expected to perform really well; so he may simply concentrate a little harder while wearing the brain gizmo. A minor difference in concentration could easily account for a 15 percent difference in performance.

It is also possible that such a minor difference in performance is simply the result of a kind of placebo effect. The power of the placebo effect is well documented. If a doctor in a white coat gives a man some sugar pill, and tells him this is a powerful cure for his ailment, the patient will very often report that the pill was effective. We don't understand why this happens again and again, and it may be a mysterious type of mind over matter. It is entirely possible that such a placebo effect can also come into play in a memory test. Hook someone up to some fancy brain gizmo and test his memory, and he may perform a little better. The result may have nothing to do with the gizmo; the person may simply perform a little better because he believed something had been done to make him perform better, a kind of placebo effect.

One recent experiment (dubiously billed as a test of a "memory prosthesis") involved 22 patients with brain electrodes who, on 100 occasions, chose a particular visual image from a group of images. The brain electrodes recorded electrical activity in the hippocampus. Then some of these signals were played back in a later test in which patients had to pick the original image from a set of 3. This reportedly produced a 40% increase in "memory recall" involving that particular image. But it is known that the hippocampus has a role in visual perception, as this long scientific paper tells us. So what could have been going on here is that some of the visual perception activity of the brain was captured and replayed. Such a result can be explained without any assumption that anything involving memory is going on. We could have a bit of "perception playback" rather than enhanced memory recall. 

I can imagine a type of experiment which, to the best of my knowledge, has never succeeded. A person would be hooked up to a brain scanner, and would then try to concentrate very hard on one of 7 things which he had not recently seen, such as an apple, a banana, a blue ring, and so forth. Software connected to the scanner would then try to guess which of the things the person was thinking about. If the prediction was successful, would this prove that the brain is generating such ideas? Not at all. It would merely suggest that the visual cortex used by the brain in vision can be leveraged when someone is trying hard to visualize something. A mind's eye image of something is not necessarily the idea of the thing. You may first have the idea of Marilyn Monroe in a bikini, and then concentrate hard to visualize it, fleshing out some details (such as imagining a particular bikini color). If we use a little of the visual cortex when making a vivid visualization in our minds, that does not prove the preceding idea came from our brains.

The tabloid Daily Mail has a story about a "mind-reading headset" that has "90 percent accuracy." It has nothing to do with the brain, however. It turns out that if you try hard to speak a word in your mind, so you "can hear it loudly" in your mind's ear, you usually inadvertently use a little muscle movement to do that. You can prove this by trying to "silently shout" the word "banana" in your mind while holding your neck -- you should feel a little muscle movement. So the "mind-reading headset" is merely picking up such muscle signals. And the "90% accuracy" is only for a few words that it's been trained on. This tells us nothing about whether your brain is creating thoughts.

You will no doubt continue to see quite a few "mind-reading machine" stories in the news, even though no one will actually create such machines. The rule on the web these days is "clicks = cash." The more readers click on links to sensational-sounding science stories, the more advertising revenue the web site makes. With such a situation, there is a great incentive for careless exaggeration of science and technology developments.

Postscript: In April 2019 we had a news story with the headline "Synthetic speech generated from brain recordings." But it wasn't at all a case of generating speech from mere neural activity recordings of someone thinking words in his mind. The neural activity recordings were taken while people were speaking aloud. The fancy system also used as input vocal recordings of what the people said. So a more accurate headline would have been "Synthetic speech generated from brain recordings and tape recordings of speech." We cannot tell from this whether the system would have failed if it had used only the neural activity recordings. Such a system is not evidence that thought comes from the brain, although it could be evidence that the brain helps you to move your mouth muscles.
