Monday, November 25, 2024

The Brains of 60 Subjects Seemed to Look the Same During Eyes Closed Mental Rest, Recall and Math Activity

An EEG (electroencephalograph) is a device that can detect electrical activity from parts of the brain. When an EEG device is used, electrodes are placed on the scalp next to different parts of the skull. The device picks up a dozen or more different traces that show electrical activity in different parts of the brain. 

Brains have a great deal of signal noise, and the abundance of such noise is one of several major reasons for disbelieving that the brain is the source of human thinking and recall, which can occur with incredible accuracy, such as when people perfectly recall very large bodies of text or perfectly perform extremely difficult math calculations without using tools such as computers, pencils or paper. The analysis of brain waves obtained by EEG devices is an area of science where bad methods, pareidolia and junk analysis are very abundant. There is an abundance of people trying to use fancy statistical methods to extract identifiable "signals" or "signs" from data that is very noisy and polluted. Muscle movements abundantly contaminate EEG readings. 

A widely used publicly available dataset of EEG data is available on a site called Physionet. On a page entitled "EEG During Mental Arithmetic Tasks" it is possible to download EEG data for 36 subjects. The data includes EEG readings taken during "rest activity" and EEG readings taken when the subjects were told to perform mathematical operations.  The paper here ("Electroencephalograms during Mental Arithmetic Task Performance") describes how the data was gathered.  The data set is sometimes called the "EEG During Mental Arithmetic Tasks" or it may be called something like the "Physionet EEG mental arithmetic task dataset."

I don't recommend trying to download this data, because it is stored in the EDF (European Data Format) file format, which your spreadsheet or text editor will not be able to understand. But at the page here, we have some comments by a person who downloaded this data, along with a utility program that allowed him to view the data as wavy line traces. 

After showing us a picture of one subject whose brain wave lines looked different when he was doing the math tasks, the writer states, "Other participants didn’t see much change at all while doing their tasks." By this he means that when he looks at the brain waves of such participants, they don't look different when the subjects were doing the math tasks (compared to when they were resting). The writer also states, "In fact, some data looked like the brain had more activity while doing nothing at all." We see one visual with brain wave lines showing "baseline" activity for Subject 15, and another visual showing brain wave lines during that subject's performance of math tasks. The first visual shows wavy lines that are a lot wavier than those in the second visual, contrary to the idea that mental activity would involve more active brain waves. 

You can read scientific papers by scientists who create algorithms or models that analyze data sets such as this, algorithms or models trying to detect whether a particular set of EEG readings was or was not taken while a subject was engaging in heavy thinking. A typical paper of this type will discuss several different algorithms or models the scientists tested. We may be told that the most successful algorithm had something like a 75% success rate in predicting whether a set of EEG readings was produced by rest activity or by thinking.  

Such a thing is unimpressive when you consider that the data set being used for testing is usually small. In many cases half of the subject data will be used to "train" the model, and the other half will be used to test the model. So maybe the data for only 8 or 10 subjects will be used to test the model. The odds of accidental success in guessing whether each person's mind was active or not (even if the model is worthless) are something like this (the figures below are the chances of getting at least the stated number of guesses correct; I used the StatTrek binomial probability calculator to calculate some of the odds):

Eight subjects:

Chance of 8 guesses all correct = (1/2) to the 8th power = 1/256.

Chance of at least 7 out of 8 guesses correct = .035

Chance of at least 6 out of 8 guesses correct = .14

Ten subjects:

Chance of 10 guesses all correct = (1/2) to the 10th power = 1/1024

Chance of at least 9 out of 10 guesses correct = .01

Chance of at least 8 out of 10 guesses correct = .05

Chance of at least 7 out of 10 guesses correct = .17
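These tail probabilities can be checked with a few lines of exact binomial arithmetic (a minimal sketch; no statistics library needed):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting at least k of n guesses correct at chance level p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Eight subjects
print(round(p_at_least(8, 8), 4))   # all 8 correct: 1/256 = 0.0039
print(round(p_at_least(7, 8), 3))   # at least 7 of 8: 0.035
print(round(p_at_least(6, 8), 2))   # at least 6 of 8: 0.14

# Ten subjects
print(round(p_at_least(9, 10), 2))  # at least 9 of 10: 0.01
print(round(p_at_least(8, 10), 2))  # at least 8 of 10: 0.05
print(round(p_at_least(7, 10), 2))  # at least 7 of 10: 0.17
```
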

Now, with odds like these it means very little if some scientific paper says that it tried several different predictive models, and found that one of the models had a 70% predictive accuracy. You might rather easily get that level of success by pure chance, even if the model is worthless or if the "mental activity" readings have no identifying characteristics. We must also remember here factors such as publication bias and the file drawer effect. Publication bias is the tendency of scientific journals to reject negative results, and to accept for publication only papers reporting positive results. The file drawer effect is the fact that scientists are free to try different things without publishing their failures, and without submitting failed attempts for publication. So a scientist who produces a slightly successful predictive model analyzing EEG data may have in his file drawers 40 failed attempts involving unsuccessful predictive models. Getting a "70% successful" predictive model on the 20th or 30th try does not mean that the EEG data actually shows a difference when people are thinking versus when their minds are resting. 
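The file-drawer arithmetic is easy to simulate. The sketch below (the numbers 30 models and 10 test subjects are purely illustrative) has each "model" make pure coin-flip predictions, and reports how often the best of the batch reaches 70% accuracy anyway:

```python
import random

def best_of_worthless_models(n_models=30, n_subjects=10, trials=10_000, seed=1):
    """Fraction of simulated studies in which at least one pure-chance
    model reaches 70% accuracy on the test subjects."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Each "model" just guesses; its score is the number of correct coin flips.
        best = max(
            sum(rng.random() < 0.5 for _ in range(n_subjects))
            for _ in range(n_models)
        )
        if best >= 0.7 * n_subjects:
            hits += 1
    return hits / trials

print(best_of_worthless_models())  # roughly 0.99: nearly every such "study" finds a 70% model by chance
```
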

Then there is the fact that the gathering of EEG data must be done very carefully for any data set that compares intensive mental activity with rest activity. Visual activity, muscle activity and stress can produce traces in EEG data.  So, for example, it might be easy to detect the difference between rest activity and mental activity if the subject is motionless and closes his eyes during rest activity, and the subject uses a keyboard to type answers during the mental activity.  In that case the difference would come from the fact that during the rest activity there is no use of the eyes and muscles, and during the mental task there is use of the eyes and muscles. 
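Muscle (EMG) contamination of the kind described above is often screened for with a crude but common heuristic: muscle activity shows up as disproportionate spectral power above roughly 30 Hz. A minimal sketch on synthetic signals (the 30 Hz cutoff and thresholds are illustrative choices, not a standard):

```python
import numpy as np

def high_freq_ratio(signal, fs):
    """Fraction of total spectral power above 30 Hz -- a crude EMG-contamination index."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[freqs > 30].sum() / power.sum()

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

clean = np.sin(2 * np.pi * 10 * t)                      # 10 Hz alpha-like rhythm
contaminated = clean + 2 * rng.standard_normal(len(t))  # broadband muscle-like noise

print(high_freq_ratio(clean, fs) < 0.1)         # True: almost no power above 30 Hz
print(high_freq_ratio(contaminated, fs) > 0.3)  # True: flagged as contaminated
```
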

The paper here ("Electroencephalograms during Mental Arithmetic Task Performance") describes some poor methods of gathering rest data and mental arithmetic data used to create the "EEG During Mental Arithmetic Tasks" data set that has been the basis of quite a few scientific papers.  We are told this:

"Mental arithmetic performance is considered as a standardized stress-inducing experimental protocol. Serial subtraction during 15 min is considered to be a psychosocial stress. In this way, our study design required intensive cognitive activity from the subjects. Intensive mental load is accompanied by a change in the emotional background when the subject makes additional effort to resolve tasks, so one can talk about evoked emotions in this case.During EEG recording, the participants sat in a dark soundproof chamber, comfortably reclined in an armchair. Prior to the experiment, participants were instructed to try to relax during the rest state and were informed about the arithmetic task—participants were asked to count mentally without speaking or using finger movements, accurately and quickly, in the rhythm they had determined. After 3 min of adaptation to experimental conditions, EEG registration of the rest state with closed eyes was made (over the next 3 min). Then the participants performed a mental arithmetic task—serial subtraction—for 4 min."

We are also told that the scientists kept only a subset of the original data gathered, throwing out about half of the data:

"Based on EEG visual inspection by a qualified electroneurophysiologist, 30 of the 66 initial participants were excluded from the database due to poor EEG quality (excessive number of oculographic and myographic artifacts), so the final sample size is 36 subjects."

It is easy to see how that could have gone wrong. The desire to get a set of EEGs in which the mental activity data looks different from the rest state data might have come into play, creating a bias in so subjective a selection of which subjects to keep. 

There's much gone wrong here. We have no description of a rest state that clearly involved a lack of mental activity. Were the subjects hearing anything said to them during the rest state? That isn't a rest state. Did any of the subjects move during the rest state? That isn't a rest state. Were the subjects counting during the rest state? We can't even tell from the wording above. Did the subjects have their eyes closed when they were doing the mental subtractions? We don't know. Were the subjects disqualified if they violated the instructions by softly speaking as they counted backwards? Apparently not. The subjects were told to follow a rhythm during counting, an instruction which might have tended to produce sounds or motions such as tapping. The subjects were not told to be motionless, but merely told not to use their fingers (an instruction that would not exclude arm movements, foot tapping or a rocking motion in their reclining armchair). Also, the subjects were asked what the final number was after their mental subtractions. That might have created an element of anxiety, in which people would worry about whether the final number (after their mental subtractions) was correct. Such anxiety might have shown up in the EEG readings as signs of anxiety that were not signs of mental effort. Also, based on the subjective whims of a human judge, only about half of the data collected was put in the public data set. The mental activity requested (serial subtraction) is a mental activity that almost seems designed to create distress and frustration in subjects, which may show up as EEG blips that are not signs of thinking. 

Data like this has no value unless there is a crystal-clear description of the exact procedure used during the rest state and the mental activity state. That description should include a precise detailing of whether the subjects had their eyes open, an exact quotation of what they were told, an exact description of whether the subjects moved or spoke, a description of what (if any) methods were used to prevent the subjects from moving, and so forth. Comparing mental rest states and mental activity states (from EEG data) cannot be done effectively unless the mental activity states occur under the exact sensory and movement conditions of the rest state; it would seem the only good method would be for subjects to have eyes closed (without any sounds) both in the rest state and the mental activity state, without any possible source of mental anxiety in either state. All papers based on the data set described (the "EEG During Mental Arithmetic Tasks" data set) would seem to have little value because of the failure (in the paper describing how the data was gathered) to describe an effective, well-documented protocol for distinguishing between real rest activity and sightless, soundless, motionless mental activity without any element of potential anxiety. 

I can tell you how a valid data set of EEG data might be created for the comparison of rest data and mental activity data. People would be blindfolded in a dark silent room. They would be told that when they hear a first electronic beep, they should remain motionless for two minutes and think of absolutely nothing other than the blackness of outer space. They would be told that when they hear a second beep, they should remain motionless and start some arithmetic activity such as continually adding the number 7, continuing for two minutes until they hear a third beep, at which point the EEG readings will stop. The people would also be told to remain motionless and without any expression throughout the whole four minutes of testing. They would also be told that no one will ask them what the final number in their minds was, so that there is no reason for any anxiety. They would be told, "Don't worry at all if you think one of your numbers is wrong -- just keep adding 7 to whatever your last number was." A variety of sensitive motion detectors could be used to exclude any subjects who moved significantly. The number of subjects in the final data set would be at least 60, requiring a much larger original pool of test subjects. Exclusion of subjects would be based on objective criteria such as motion detector activation, rather than arbitrary exclusion based on subjective human judgment. Heart rate data would be gathered, and any subjects showing signs of increased heart rate during the mental activity phase (a sign of stress) would be excluded from the data set. Sensitive sound detectors would also listen for people who softly counted the numbers, excluding such subjects. Ideally, the subjects would wear mouth devices preventing any soft counting. 
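Under a protocol like the one just described, the exclusion step would be mechanical rather than a judgment call. A minimal sketch (all field names and the heart-rate threshold below are my own illustrative choices, not part of any published protocol):

```python
def keep_subject(record):
    """Objective exclusion: drop a subject on any motion-detector trigger,
    any detected vocalization, or a heart-rate rise suggesting stress."""
    return (
        record["motion_events"] == 0
        and record["sound_events"] == 0
        and record["hr_increase_bpm"] <= 5  # illustrative stress threshold
    )

subjects = [
    {"id": 1, "motion_events": 0, "sound_events": 0, "hr_increase_bpm": 2},
    {"id": 2, "motion_events": 3, "sound_events": 0, "hr_increase_bpm": 1},   # moved
    {"id": 3, "motion_events": 0, "sound_events": 1, "hr_increase_bpm": 0},   # counted aloud
    {"id": 4, "motion_events": 0, "sound_events": 0, "hr_increase_bpm": 12},  # stressed
]
kept = [s["id"] for s in subjects if keep_subject(s)]
print(kept)  # [1]
```
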

I predict that papers based on an analysis of data gathered in such a way (with a sufficient study group size) would fail to show any analysis method correctly predicting whether the rest state or the mental activity state occurred, tending to confirm the idea that thinking is not actually produced by the brain. The accuracy of any such method over multiple tests would never be some high percentage such as 80%. 

In neuroscience papers attempting to do EEG analysis to find neural correlates of mental activity, we tend to see some of the same problems found in papers attempting to do fMRI analysis to find neural correlates of mental activity. The biggest problem is insufficient study group sizes. Claims are made that if you analyze some EEG data in such-and-such a way, you will be able to tell (with such-and-such an accuracy) whether or not mental activity occurred. The claims are made on the basis of tiny data sets such as 8 or 10 or 12 subjects. Such claims should never convince unless they are based on large data sets involving more than 50 subjects, and unless the data sets are fully documented by a discussion of a sound procedure used to gather them. Almost always what is being picked up is not signs of mental activity but signs of muscle activity, speech, vision or emotional states. 

Here are some examples of papers that we should not be taking seriously because of defects I will mention. All of these are examples of "how not to do an EEG study looking for brain wave correlates of mental activity." 

  • "What does delta band tell us about cognitive processes: A mental calculation study" (link). The study got data on only 18 subjects. The mental calculation activity required muscle movement, and the rest activity did not. So the EEG data was not gathered so that pure mental activity was compared to pure mind resting, and "neural correlates of thinking" claims are invalid. 
  • "Real-Time Mental Arithmetic Task Recognition From EEG Signals"  (link).  Data was not gathered in a way to exclude physical differences between rest states and activity and  not gathered in a way to exclude emotional differences between rest states and mental activity.  We are told, "In the relax task, subjects were asked to open their eyes and try to be relaxed. There was no mental arithmetic task to fulfill in this session. Subjects were required to breathe deeply and focus on their breath."  Then we are told in the mental activity state "subjects were required to complete arithmetic calculations as quick as possible." Any differences detected may have been due purely to differences in stress, differences in muscle activity and differences in breathing.  
  • "EEG activation patterns during the performance of tasks involving  different components of mental calculation" (link). We have no description of a data gathering method that excluded muscle activity or caused identical levels of muscle activity during the rest period and the mental calculation period.  Any differences detected may have been due purely to differences in muscle activity. 
  • "EEG microstate features according to performance on a mental arithmetic task" (link). This paper has little value because it used the "EEG During Mental Arithmetic Tasks" data set which is defective for reasons I have explained above. 
  • "Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals" (link). This paper has little  value because it used the "EEG During Mental Arithmetic Tasks" data set which is defective for reasons I have explained above. 
  • "Impact of mental arithmetic task on the electrical activity of the human brain" (link). This paper has little value because it used the "EEG During Mental Arithmetic Tasks" data set which is  defective for reasons I have explained above. 
  • "Mental arithmetic task detection using geometric features extraction of EEG signal based on machine learning" (link). This paper has little value because it used the "EEG During Mental Arithmetic Tasks" data set which is  defective for reasons I have explained above. 

  • "Do specific EEG frequencies indicate different processes during mental calculation? (link). The EEG data was gathered from only ten subjects, and the "rest" state involved no real rest, but looking at a visual and saying, "Nothing." The math calculation involved hard problems such as "a complex arithmetic task, e.g. (24 + 39)/9 = , to which the subject had to give the solution verbally immediately after a warning response signal was presented,"  We have no description of a data gathering method that excluded muscle activity or caused identical levels of muscle activity during the rest period and the mental calculation period.  Any differences detected may have been due purely to differences in muscle activity or differences in stress between the easy task of saying nothing and the stressful task of having to answer the hard math problem "immediately." 
  • "Mental Arithmetic Task Recognition Using Effective Connectivity and Hierarchical Feature Selection From EEG Signals" (link). EEG data was gathered from 29 subjects who we are told alternated between a short period of "mental arithmetic" and "rest." We have no indication of whether this "mental arithmetic" was silent or involved speech or muscular activity.  So we can't tell whether muscular activity was the same during the rest period and the mental activity period. 
  • "Mental arithmetic task classification with convolutional neural network based on spectral-temporal features from EEG" (link). This study used a too-small dataset made from only 12 subjects.
  • "Electroencephalographic Study of Real-Time Arithmetic Task Recognition" (link). There were only eight subjects, and a professional EEG equipment was not even used, but only a cheap consumer device.  There was also no rest state for comparison. 
  • "EEG Based Mental Arithmetic Task Classification Using a Stacked Long Short Term Memory Network for Brain-Computer Interfacing" (link). This paper has little value because it used the "EEG During Mental Arithmetic Tasks" data set which is defective for reasons I have explained above. 
  • "A Modified Multivariable Complexity Measure Algorithm and Its Application for Identifying Mental Arithmetic Task" (link). This paper has little value because it used the "EEG During Mental Arithmetic Tasks" data set which is defective for reasons I have explained above. 

The paper "Investigating neural efficiency of elite karate athletes during a mental arithmetic task using EEG" discusses a relatively good protocol for gathering data during rest and mental activity. We are told that during the rest stage subjects were told to keep their eyes closed and do nothing, and during the activity stage subjects kept their eyes closed and silently counted backward from 600, subtracting 3 each time (e.g. 597, 594, 591, and so forth).  But there were only ten subjects, and the paper does not report any great success in distinguishing rest states and activity states, with the investigators concentrating on other things.  

A scientific paper ("A test-retest resting, and cognitive state EEG dataset during multiple subject-driven states" by Yulin Wang and others) laments, "Given the various advantages of EEG including non-invasive, high temporal resolution, easy-to-operate, and cheap as a neuroimaging technique, it is surprising that there exist relatively fewer high-quality, open-access, big EEG datasets when compared to magnetic resonance imaging (MRI) datasets to enable the investigation of the brain function." Correct. In general, neuroscientists involved in EEG analysis have not done their job correctly, and have failed to create large publicly available brain wave EEG data sets using very careful methods like those I describe above, which would minimize the confounding factors of signal artifacts created by muscle movement and emotional states. 

The paper tries to help this situation by creating a public EEG dataset. The effort has some good elements, but also some shortcomings. Data was gathered for 60 subjects during an eyes open rest state, an eyes closed rest state, and some mental activity states. We read this:

"During resting-state EEG recording, participants were instructed to view a fixation point for five minutes (Eyes Open) and then close eyes for another five minutes (Eyes Closed). They needed to keep still, quiet, and relaxed as much as they can, and try to avoid blinking for Eyes Open (EO) session and stay awake for Eyes Closed (EC) session. EEG cognitive state:  The present experiment consisted of three subject-driven cognitive states: retrieval of recent episodic memories, serial subtractions, and (silent) singing of music lyrics."

Alas, we are not told whether there was any method (such as motion detectors) for detecting subjects who did not follow the instructions to "keep still, quiet and relaxed as much as they can," and we do not know whether subjects failing to follow such instructions were excluded. Also, we are not told that the same instructions to "keep still, quiet and relaxed as much as they can" were given to the subjects while they were performing the cognitive tasks. So we don't know whether the levels of motion were the same when the subjects rested and when they did the cognitive tasks. But on the plus side, the number of subjects used (60) is pretty good, and there is also a good "test/retest" feature in which each subject was tested on multiple days. 

Figure 6 of the paper gives us a very interesting visual showing something called the "averaged power spectrum" for all of the 60 subjects. We have five colored lines, two of which (light blue and yellow) represent the rest states, with the other three representing the mental activity states. It is interesting that all of the lines are the same, except that for the "eyes open" rest state, part of the line looks a little different. Referring to the "eyes-closed" rest state, which was a state of mental inactivity, the paper tells us "the spectrum of the four states of eyes-closed, subtraction, music, and memory are particularly similar." 

EEG rest versus activity
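An "averaged power spectrum" like the one in Figure 6 is typically computed by averaging windowed periodograms (Welch's method). The sketch below runs on synthetic signals rather than the paper's data (scipy is assumed available), and simply shows the kind of computation involved: two recordings with the same underlying rhythm yield spectra peaking at the same frequency.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(42)

# Two synthetic "conditions" with the same 10 Hz rhythm and noise level,
# standing in for rest-state and task-state recordings that differ in label only.
rest = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))
task = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))

f_rest, p_rest = welch(rest, fs=fs, nperseg=1024)
f_task, p_task = welch(task, fs=fs, nperseg=1024)

# Both averaged spectra peak at the same frequency (about 10 Hz)
print(f_rest[np.argmax(p_rest)], f_task[np.argmax(p_task)])
```
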

This is what we would expect under a "your brain does not make your mind" assumption. There is no significant brain signal difference between someone resting his mind with his eyes closed, and someone doing mental activities.  We see something similar in Figure 7 of the paper, which shows us something called the "power distribution of alpha rhythm." The Eyes Closed rest state (EC, in which people's minds were supposed to be inactive) looks the same as when the people were doing mental activity and mental recall. The last four columns on this chart all look the same, and the second column is the Eyes Closed rest state (the last two columns being memory activity and math activity). 

EEG rest versus mental activity

I find the two visuals above to be quite consistent with the claim that your brain is not the source of your mind and not the storage place of your memories. 

Monday, November 18, 2024

Neuroscience News Stories May Contain Flagrant Absurdities

The writer of a typical neuroscience news story seems to take an "anything goes" attitude in which the underlying assumption seems that any kind of boast is allowed, no matter how loony-sounding. As an example to support this claim, I offer a recent BBC news article with a headline of "Fly brain breakthrough 'huge leap' to unlock human mind."


neuroscience nonsense

                        Neuro-nonsense, BBC style

Here is an excerpt from the story:

"Now for the first time scientists researching the brain of a fly have identified the position, shape and connections of every single one of its 130,000 cells and 50 million connections. It's the most detailed analysis of the brain of an adult animal ever produced.

One leading brain specialist independent of the new research described the breakthrough as a 'huge leap' in our understanding of our own brains. One of the research leaders said it would shed new light into 'the mechanism of thought'."

The claim at the end of the quote is obviously very absurd.  Fruit flies don't think. So there is no conceivable study of the brain of the fruit fly that could shed light on what allows humans to think. 

The study of human brains has shed no light at all on how human beings are able to think, imagine, analyze and plan. Here are some relevant quotations, all quotes of scientists:

  • "Despite substantial efforts by many researchers, we still have no scientific theory of how brain activity can create, or be, conscious experience.” -- Donald D. Hoffman Department of Cognitive Sciences University of California, "Conscious Realism and the Mind-Body Problem."
  • "Little progress in solving the mystery of human cognition has been made to date." -- 2 neuroscientists, 2021 (link). 
  • " We don't know how a brain produces a thought." -- Neuroscientist Saskia De Vries (link). 
  • "You realize that neither the term ‘decision-making’ nor the term ‘attention’ actually corresponds to a thing in the brain." -- neuroscentist Paul Ciskek (link). 
  • "We know very little about the brain. We know about connections, but we don't know how information is processed." -- Neurobiologist Lu Chen
  • "Computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. Humans, on the other hand, do not — never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?" -- Senior research psychologist Robert Epstein.
  • "The neuroscientific study of creativity is stuck and lost." -- Psychologist Arne Dietrich,  "Where in the brain is creativity: a brief account of a wild-goose chase."
  • "How creative ideas arise in our mind and in our brain is a key unresolved question." -- nine scientists (link).
  • "The central dogma of Neuormania is that persons are their brains....Basic features of human experience...elude neural explanation. Indeed, they are at odds with the materialist framework presupposed in Neuromania. Many other assumptions of Neuromania -- such as that the mind-brain is a computer -- wilt on close inspection. All of this notwithstanding, the mantra 'You are your brain' is endlessly repeated. This is not justified by what little we know of the brain, or more importantly, of the relationship between our brains and ourselves as conscious agents."  -- Raymond Tallis, Professor of Geriatric Medicine, University of Manchester, "Aping Mankind," page xii (link). 
  • "And so we are forced to a conclusion opposite to the one drawn earlier: that consciousness cannot be due to activity in the brain and that cerebral activity is an inadequate explanation of mental activity."  -- Raymond Tallis, Professor of Geriatric Medicine, University of Manchester, "Brains and Minds: A Brief History of Neuromythology" (link). 
  • "My own view of a secular universe, devoid of consciousness and intelligence 'beyond the brain' (Grof 1985) gave way little by little over several decades and now seems quite absurd." John Mack MD, Harvard professor of psychology (link). 
  • "The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable. Were we able even to see and feel the very molecules of the brain, and follow all their motions, all their groupings, all their electric discharges if such there be, and intimately acquainted with the corresponding states of thought and feeling, we should be as far as ever from the solution of the problem,...The chasm between the two classes of phenomena would still remain intellectually impassable."  -- Physicist John Tyndall (link).
  • "Many who work within the SMC [standard model of consciousness] assume that a nervous system is necessary and sufficient for an existential consciousness. While this is a common stance...we have yet to see a coherent defense of this proposition or a well-developed biomolecular argument for it. For most, it is simply a proclamation. Moreover, we have not seen any effort to identify what features of neural mechanisms 'create' consciousness while non-neural ones cannot. This too is simply a pronouncement." -- Four scientists, "The CBC theory and its entailments," (link).
  • "But when it comes to our actual feelings, our thought, our emotions, our consciousness, we really don't have a good answer as to how the brain helps us to have those different experiences." -- Andrew Newberg, neuroscientist, Ancient Aliens, Episode 16 of Season 14, 6:52 mark. 
  • "Dr Gregory Jefferis, of the Medical Research Council's Laboratory of Molecular Biology (LMB) in Cambridge told BBC News that currently we have no idea how the network of brain cells in each of our heads enables us to interact with each other and the world around us."  -- BBC news article (link). 
Study of the human brain has shed no light on how humans are able to think, write, plan, analyze and imagine. The idea that we might get some great insight on such a topic by studying the brains of fruit flies (which do not think, write, plan, analyze and imagine) is absurd. 

We are given only this example of some insight from studying the fruit fly brain:

"The researchers have been able to identify separate circuits for many individual functions and show how they are connected. The wires involved with movement for example are at the base of the brain, whereas those for processing vision are towards the side. "

Yes, there is a motor cortex that helps in moving muscles, and a visual cortex that helps in vision. But scientists have known that for fifty years, and we sure didn't learn that by mapping all the neurons and connections in the brain of a fruit fly. 

No progress has been made here in understanding the human mind, contrary to the bogus headline that there occurred a "huge leap" in understanding the human mind. But don't blame the writer.  Blame the neuroscientist quoted as making the bogus claim that this "huge leap" occurred.  Who was that? We'll never know, because the story has merely said "one leading brain specialist" said such a thing, without ever identifying who that scientist was. 

The story epitomizes the BBC's tendency to uncritically parrot the most groundless and silly-sounding claims whenever they are made by scientists. BBC science journalists covering scientists act like North Korean journalists covering North Korean dictators. 


Monday, November 11, 2024

Faltering Neuroscientists Improperly Offer Computer Science Papers as Examples of Neuroscience Progress

A recent scientific paper was entitled "The coming decade of digital brain research: A vision for neuroscience." Consisting of little more than 100 paragraphs, the paper has more than 100 authors, reminding us of the ridiculous tendency these days for neuroscience papers to list excessive numbers of authors. We may wonder: was the operating rule here "only one paragraph per author"?

Papers like this may remind us of the sad state of current neuroscience, in which the Supremely Important Things seem to be not good science methodology and strict accuracy of statements but instead an author's paper count (his total of published papers) and an author's citation count (how many citations those papers have received). So we have endless Questionable Research Practice papers following bad methodology, often making untrue but interesting-sounding claims in their titles or abstracts, papers that typically list more than ten authors each. It is as if "quick and dirty" were the operating rule rather than "slow and clean," as if people were trying at all costs to increase their count of published papers and the citations such papers get, while paying relatively little attention to the quality of such papers. It is as if "quantity not quality" were the operating principle. Things get supremely ridiculous when researchers then dishonestly make claims such as "I am the author of 75 published scientific papers," when such a researcher has merely co-authored most of those papers, with the co-authorship often being a measly "decile co-authorship" in which the author is only one of ten or more listed authors. 

The paper starts out immediately by making a boastful baloney claim: the claim that "in recent years, brain research has indisputably entered a new epoch." No, the kind of brain research we are getting in the 2020s is very little different from the kind of brain research we got in the 2010s. The authors discuss the Human Brain Project, which (despite billions in funding) failed to make any real progress in supporting the "brains make minds" claims that neuroscientists like to make, completely failing to provide evidence of a brain storage of memories. The paper authors attempt to persuade us otherwise. They make the following statement: "To give a few examples, research in the project has led to new insights into the mechanisms of learning (Bellec et al., 2020; Cramer et al., 2020; Deperrois et al., 2022; Göltz et al., 2021; Jordan et al., 2021; Manninen et al., 2020; Masoli et al., 2021; Stöckl & Maass, 2021; van den Bosch et al., 2022), visuomotor control (Abadía et al., 2021; Pearson et al., 2021), vision (Chen et al., 2020; Svanera et al., 2021; van Vugt et al., 2018), consciousness (Demertzi et al., 2019; Lee et al., 2022), sleep (Capone et al., 2019; Le Van Quyen et al., 2016; Rosanova et al., 2018), spatial navigation (Bicanski & Burgess, 2018; Northoff et al., 2020; Stoianov et al., 2018; van Beest et al., 2021), predictive coding and perception (Oude Lohuis et al., 2022), as well as language (Dehaene et al., 2015) and has resulted in new theoretical concepts and analysis methods."

The claim that any of these studies provided "new insights into the mechanisms of learning" is incorrect, and neuroscientists lack any understanding of any neural mechanism of learning.  Neuroscientists give us nothing other than empty hand-waving whenever they try to speak of a brain mechanism of learning. Let's take a close look at the papers cited, to see how none of them provide any insights into a brain mechanism of learning:

  • "Bellec et al., 2020": This refers to the paper "A solution to the learning dilemma for recurrent networks of spiking neurons" here. This is a computer science and mathematics paper that brags about some Atari video game result produced by a software program. It is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells. 
  • "Cramer et al., 2020": This refers to the paper "Control of criticality and computation in spiking neuromorphic networks with plasticity." This is a computer science paper that talks about some result produced using an electronic hardware chip that was inaccurately described as "neuromorphic," a term presumably meaning "like a neuron." A visual of this chip shows something looking nothing like brain tissue. This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells. The visual below shows some misleading labels and captions used in the paper. 
  • "Deperrois et al., 2022": This refers to the paper "Learning cortical representations through perturbed and adversarial dreaming." The paper discusses experiments done with some fancy electronic device or computer software implementation depicted in Figure 8 of the paper. This is not a paper involving any experiments with living organisms or any new observations of living organisms. 
  • "Göltz et al., 2021": This refers to the paper "Fast and energy-efficient neuromorphic deep learning with first-spike times." The paper discusses experiments done with some fancy electronic device or computer software implementation, misleadingly using the term "neurons" repeatedly for parts of such a thing that are not actually neurons. This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells. By now we can learn the lesson that whenever you read the word "neuromorphic" in a science paper title (a term meaning "like neurons"), the paper is talking about some computer software and/or computer hardware setup rather than something actually in a brain. 
  • "Jordan et al., 2021": This refers to the paper "Evolving interpretable plasticity for spiking networks." The paper discusses experiments done with some fancy electronic device or computer software implementation. This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells.
  • "Manninen et al., 2020": This refers to the paper "Astrocyte-mediated spike-timing-dependent long-term depression modulates synaptic properties in the developing cortex." This paper involves what it calls "in silico experiments," a term meaning experiments done with some fancy electronic device or computer software implementation. This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells. 
  • "Masoli et al., 2021": This refers to the paper "Cerebellar Golgi cell models predict dendritic processing and mechanisms of synaptic plasticity." The paper discusses experiments done with some fancy electronic device or computer software implementation. This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells.
  • "Stöckl & Maass, 2021": This refers to the paper "Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes." This is not a paper involving any experiments with living organisms or any new observations of living organisms or their cells.
  • "van den Bosch et al., 2022": This refers to the paper "Striatal dopamine dissociates methylphenidate effects on value-based versus surprise-based reversal learning." Unlike all the papers discussed above, this paper actually involved some experiments with living organisms: some humans who were given two drugs. The experiments report an improvement in performance by those given one of the drugs, but only a very slight one. Supplementary Table 1 in the Supplementary Information shows that those given a placebo scored .90, those given one drug scored .94, and those given another drug scored .88. The results are not very impressive, and do not constitute anything like "new insights into the mechanism of learning."
So I have now discussed all of the papers that were claimed, by the recent paper "The coming decade of digital brain research: A vision for neuroscience" with 100+ authors, to have provided "new insights into the mechanism of learning." We have been seriously misled. The paper has boasted that the Human Brain Project produced "new insights into the mechanism of learning," and has given the papers above as its evidence. None of the papers provides any evidence of any such thing as a neural mechanism of learning. All of the papers except one are mere computer science papers, not papers involving any experiments with living organisms or any new observations of living organisms or their cells. Repeatedly in these papers there occurs the deception of referring to purely computer software components or electronic components, and misleadingly calling them "neurons," "synapses," and "dendrites." An example of such a deception is shown below, which is a visual from one of the papers mentioned above:

 

misleading labels in a science paper

The paper "The coming decade of digital brain research: A vision for neuroscience" with 100+ authors has no description of any progress made in explaining how brains could produce learning, memory, consciousness, self-hood or thinking. The paper fails to use the word "engram." The paper offers some lame excuses for the lack of progress in these areas. It states this: "Our current understanding of the mechanistic operations which subserve cognitive functions, such as memory or decision making, is limited by the scale and precision of existing technologies—simultaneous microscopic recordings are limited to a few brain regions, while full-brain imaging lacks the spatial and/or temporal resolution needed." There are no "mechanistic operations which subserve cognitive functions, such as memory or decision making." The claim that there are such things is a groundless myth of the belief community of neuroscientists.  The excuse given is one that sounds as phony as a three-dollar bill. For decades microscopes have had a power which should have been sufficient to discover memories stored in brains, if they existed.  

Below is a diagram from the paper "Materials Advances Through Aberration-Corrected Electron Microscopy." We see that since the structure of DNA was discovered around 1953, microscopes have grown very many times more powerful. The A on the left stands for an angstrom, a tenth of a nanometer (that is, a ten-billionth of a meter). 


Currently the most powerful microscopes can see things about 1 angstrom in width, which is a tenth of a nanometer. How does this compare to the sizes of the smallest units in brains? Those sizes are below:

Width of a neuron body (soma): about 100 microns (micrometers), which is about 1,000,000 angstroms.

Width of a synapse: about 20-30 nanometers, about 200-300 angstroms.  

Width of a dendritic spine: about 50 to 200 nanometers, about 500 to 2000 angstroms.

Clearly the most powerful microscopes have a resolution sufficient to read memories stored in neurons or synapses, if such memories existed. And more than 14,000 brains have been microscopically studied in recent years. The failure to microscopically read any memories from human brain tissue is a major reason for thinking that brains do not store human memories.  
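The unit arithmetic above can be checked with a few lines of Python. This is only a sanity check of the conversions quoted in the text: the structure widths are the approximate figures listed above (midpoints used for the ranges), and the 1-angstrom figure is the resolution cited for the best current microscopes.

```python
# Sanity check of the unit conversions quoted above.
# 1 micron = 10,000 angstroms; 1 nanometer = 10 angstroms.
ANGSTROMS_PER_MICRON = 10_000
ANGSTROMS_PER_NM = 10

# Approximate structure widths from the text (midpoints used for ranges).
structures = {
    "neuron soma (~100 microns)": 100 * ANGSTROMS_PER_MICRON,
    "synapse (~25 nm)": 25 * ANGSTROMS_PER_NM,
    "dendritic spine (~100 nm)": 100 * ANGSTROMS_PER_NM,
}

microscope_resolution = 1  # angstroms; best current electron microscopes

for name, width in structures.items():
    print(f"{name}: {width:,} angstroms, "
          f"{width // microscope_resolution:,}x the resolution limit")
```

Each structure is thus hundreds to a million times wider than the resolution limit, which is the point being made here.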

scientists ignoring evidence

Besides failing to find specific memories and items of learned knowledge by microscopically examining brains (such as the information that the New York Yankees belong to the American League of US baseball), scientists can find no evidence of a mechanism for storing learned information in brains. If such a mechanism existed, its fingerprints would be all over the place. Since humans can learn and remember so many different types of things (sights, sounds, feelings, facts, beliefs, opinions, numbers, smells, tastes, physical pains, physical pleasures, music, quotations, and so forth), any brain mechanism for storing all of these things would have a massive footprint in the brain and in the genome. No sign of any such thing can be found. The workhorses that get things done in the body are proteins, and humans have more than 20,000 types of proteins. No one has ever identified a protein that helps to write a memory of experiences or numbers or words to the brain or neural tissue, in any kind of way that helps explain how memories or knowledge could be stored in brains. Of course, you can find studies that perhaps show protein XYZ was used when someone learned something, but that does nothing to show a mechanism of memory storage. 

The paper "The coming decade of digital brain research: A vision for neuroscience" with 100+ authors fails to seriously discuss the gigantic rot at the core of today's neuroscience: the massive production of irreproducible results caused by countless experimenters doing "quick and dirty" research following Questionable Research Practices such as inadequate sample sizes, lack of blinding protocols, and the use of poor measurement techniques such as judgments of freezing behavior.

The paper makes this laughable statement: "Brain simulation is expected to play a key role in elucidating essential aspects of brain processes (by demonstrating the capacity to reproduce them in silico), such as decision-making, sensorimotor integration, memory formation, etc." What silly drivel this is. Computers are computers, and brains are brains. You do nothing to show that brains produce decisions or that brains form memories by showing that computers can make decisions or that computers can store new information in computer memory systems. 

It is rather clear what is going on. Our neuroscientists are getting nowhere trying to show that brains can produce thinking or learning or store memories or retrieve memories (things that brains do not actually do). To try to cover up their lack of progress, neuroscientists are appealing to computer science papers that merely show examples of computers storing, retrieving and processing data. It's kind of like a husband who is failing to earn a salary bragging to his wife about all the virtual money he is making in some life-simulation game such as The Sims. 

typical bad neuroscience paper
Click to see left column more clearly

typical science news story
Click to see left column more clearly

Monday, November 4, 2024

They Memorized Many Times Faster Than a Brain Could Ever Do

The main theory of a brain storage of memories is that people acquire new memories through a strengthening of synapses. There are many reasons for disbelieving this claim. One is that information is generally stored through a writing process, not a strengthening process. There seems never to have been a verified case of any information being stored through a mere process of strengthening. Another reason for rejecting the claim is that human memories can last 1000 times longer than the average lifetime of proteins in the brain. A scientific paper states, "Recent studies have revealed that most proteins, including synaptic proteins, have half-lives that range between 5 and 7 days (Cohen et al., 2013, Dörrbaum et al., 2018)." For a simple exponential decay, the average lifetime of a protein is about 1.4 times its half-life (the half-life divided by ln 2). 

The 2018 paper here is entitled "Brain tissue plasticity: protein synthesis rates of the human brain." It tells us the astonishing fact that proteins in the human brain are replaced at a rate of 3% to 4% per day. We read this:

"Where skeletal muscle tissue has been shown to turnover at a rate of 1–2% per day, here we show that brain tissue turns over much faster at a rate of 3–4% per day. This would imply complete renewal of brain tissue proteins well within 4–5 weeks. From a physiological viewpoint this is astounding, as it provides us with a much greater framework for the capacity of brain tissue to recondition. Moreover, from a philosophical perspective these observations are even more surprising. If rapid protein turnover of brain tissue implies that all organic material is renewed, then all data internalized in that tissue are also prone to renewal. These findings spark (even) more debate on the interpretation and (long-term) storage of data in neural matter, the capacity of humans to consciously or unconsciously process data, and the (organic) basis of our own personality and ego. All of this becomes quite remarkable in light of such rapid protein turnover rates of the human brain."

Such rapid replacement of brain proteins is utterly inconsistent with claims that brains store old memories of what someone learned decades ago. A person like me remembers many things he learned 50 years ago; but if my brain were storing my memories, I would not be able to remember back more than a few months, given a 3% per day replacement of brain proteins. A 2022 scientific paper confesses this:
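To make the turnover arithmetic concrete, here is a minimal sketch assuming a simple compounding (exponential) model of the 3% to 4% daily turnover quoted above. The compounding model is my assumption for illustration, not a claim from the cited paper:

```python
# Fraction of today's brain protein still present after a given number of
# days, assuming a fixed percentage is replaced each day (compounding model,
# an illustrative assumption).
def fraction_remaining(daily_turnover: float, days: int) -> float:
    return (1.0 - daily_turnover) ** days

for rate in (0.03, 0.04):
    for days in (30, 365):
        f = fraction_remaining(rate, days)
        print(f"At {rate:.0%}/day turnover, after {days} days "
              f"about {f:.2%} of the original protein remains")
```

At 3% per day, only about 40% of the original protein survives a month, and essentially none survives a year, which is the tension with decades-old memories described here.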

"Conclusive evidence that specific long-term memory formation relies on dendritic growth and structural synaptic changes has proven elusive. Connectionist models of memory based on this hypothesis are confronted with the so-called plasticity stability dilemma or catastrophic interference. Other fundamental limitations of these models are the feature binding problem, the speed of learning, the capacity of the memory, the localisation in time of an event and the problem of spatio-temporal pattern generation."

If it were true that memories were stored by a strengthening of synapses, this would be a slow process. The only way in which a synapse can be strengthened is if proteins are added to it. We know that the synthesis of new proteins is a rather slow process, requiring many minutes. In addition, there would have to be some very complicated encoding going on if a memory were to be stored in synapses. The reality of newly-learned knowledge and new experience would somehow have to be encoded or translated into some brain state that would store this information. When we add up the time needed for this protein synthesis and the time needed for this encoding, we find that the theory of memory storage in brain synapses predicts that the acquisition of new memories should be a very slow affair, occurring at only a tiny bandwidth, a mere trickle. Do a Google image search for "speed of protein synthesis" and you will see charts that look like this (with data point dots scattered across the lines):



Don't make the mistake of thinking that a brain storage of new memories would occur as quickly as the speed of protein synthesis.  Such a storage would rely on three things that would be slow:

(1) Protein synthesis itself, which would require an average of multiple minutes. 
(2) Much additional time required for some act by which sensory information was encoded, using some never-discovered storage format, into brain states or synapse states. 
(3) The time needed for signal transmission to occur across various parts of the brain, which would be quite an additional slowing factor, because of the relatively slow speed of transmission across synapses and dendrites, illustrated by the diagram below:

slow speed of brain signals

The synaptic gaps of chemical synapses and the relatively slow dendrites (speed bumps for the brain) vastly outnumber myelinated axons, meaning that in the brain the slow parts far outnumber the fast parts. 

Memory contests show that some humans can actually acquire new memories at a speed very many times greater than the slow speed that would occur if brains were storing memories by protein synthesis. For example, according to a page on the site of the Guinness Book of World Records, "The fastest time to memorize and recall a deck of playing cards is 13.96 seconds, achieved by Zou Lujian (China) at the 2017 World Memory Championships held in Shenzhen, Guangdong Province, China, on 6-8 December 2017." Memorization speeds this fast utterly discredit claims that learning occurs by synapse strengthening, which would require the synthesis of new proteins, something which would require multiple minutes. 
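The rate implied by that record is easy to compute. The five-minute figure for protein synthesis below is only an illustrative assumption standing in for the "multiple minutes" mentioned above:

```python
# Rate implied by the Guinness record quoted above: 52 cards in 13.96 seconds.
cards = 52
seconds = 13.96
cards_per_second = cards / seconds
print(f"About {cards_per_second:.2f} cards memorized per second")

# Illustrative comparison: if storing one item required one round of protein
# synthesis taking ~5 minutes (an assumed figure), how much slower would that
# be per item?
synthesis_seconds = 5 * 60
ratio = synthesis_seconds / (seconds / cards)
print(f"That is roughly {ratio:,.0f} times faster than one item "
      f"per protein-synthesis cycle")
```

So the record memorizer stored items roughly a thousand times faster than a one-synthesis-per-item model would allow, under the stated assumption.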

The page here on www.wikipedia.org describes a competition called the World Memory Championships, which has the website here.  There are various different competitions, which are described in Chapter 7 (page 57) of the handbook you can read here:

Discipline 1: a competition to memorize as many abstract images as possible, given 15 minutes to memorize and 30 minutes to recall. (Page 58.) 
Discipline 2: a competition to memorize as many binary numbers as possible, given 5 minutes to memorize and 15 minutes to recall (national level), or 30 minutes to memorize and 60 minutes to recall (international level). (Page 62.)
Discipline 3: a competition to memorize as many random decimal digits (such as 8, 9, and 2) as possible, given 15 minutes to memorize and 30 minutes to recall (national level), or 30 minutes to memorize and 60 minutes to recall (international level), or 60 minutes to memorize and 120 minutes to recall (world level). (Page 67.)
Discipline 4: a competition to memorize as many name and face combinations as possible, given 5 minutes to memorize and 15 minutes to recall (national level), or 15 minutes to memorize and 30 minutes to recall (international level or world level). (Page 70.) Competitors are asked to provide names when shown a face. 
Discipline 5: a "Speed Numbers" competition to memorize as many random decimal digits (such as 8, 9, and 2) as possible, given 5 minutes to memorize and 15 minutes to recall. (Page 75.)
Discipline 6: a competition to memorize as many pairs of dates and fictional events as possible, given 5 minutes to memorize and 15 minutes to recall. (Page 80.)
Discipline 7: a competition to memorize as many separate packs of shuffled playing cards as possible, given 10 minutes to memorize and 30 minutes to recall (national level), or 30 minutes to memorize and 60 minutes to recall (international level), or 60 minutes to memorize and 120 minutes to recall (world level). (Page 82.)
Discipline 8: a competition to memorize as many random words as possible, given 5 minutes to memorize and 15 minutes to recall (national level), or 15 minutes to memorize and 30 minutes to recall (international level or world level). (Page 87.) 
Discipline 9: a "Spoken Numbers" competition to memorize as many spoken numbers as possible, with the numbers being read at a rate of one number per second. (Page 92, complicated rules.)
Discipline 10: a "Speed Cards" competition to commit to memory as many cards as possible, given 5 minutes or less for memorization, and only 5 minutes for recall. (Page 98.)

Below is performance data recorded on the site and on the wikipedia.org page here.

Discipline 1, abstract images:  Two competitors in 2021 (Huang Jinyao and Xu Yangran) were able to memorize more than 1000 abstract images in only 15 minutes (or score more than 1000 points on such a competition, indicating similar ability).
Discipline 2, binary digits: Four competitors in 2021 were able to recall more than 600 binary digits memorized in a 30-minute period. Ryu Song I was able to recall 7485 binary digits memorized in a 30-minute period (WMSC World Championship 2019).
Discipline 3, random decimal digits: Ryu Song I was able to recall 4620 decimal digits memorized in an hour-long period (WMSC World Championship 2019). Seven competitors in 2021 were able to recall more than 600 decimal digits memorized in a 30-minute period. 
Discipline 4, face and name combinations: Katie Kermode was able to recall the names of 224 previously unseen people from their images, having had only 15 minutes to memorize their names (IAM World Championship 2018). Similarly, the scientific paper here says someone identified as SM1 "memorized 215 German names to the corresponding faces within 15 minutes at the Memoriad in 2015 in Istanbul." (The paper stated that the super-memorizers it studied did not have increased hippocampal volumes.) Several Mongolian or Chinese contestants were able to recall the names of more than 600 previously unseen people from their images, having had only 15 minutes to memorize their names (2021 World Memory Championships). 
Discipline 5, Speed Numbers: Wei Quinru was able to recall 642 digits memorized in a 5-minute period (Korea Open Memory Championship 2024). Four  people were able to each recall more than 800 digits memorized in a 5-minute period (2021 World Memory Championships).
Discipline 6, Dates and Fictional Events:  Prateek Yadav memorized in 5 minutes dates corresponding to 154 fictional events (2019).  Several other contestants memorized in 5 minutes dates corresponding to 700+ fictional events (2021 World Memory Championships).
Discipline 7, "Hour Cards" Card Memorization:  Kim Su Rim memorized 2530 cards in 60 minutes.  
Discipline 8, Random Words: Prateek Yadav memorized 335 random words in 15 minutes. Several others in 2021 memorized more than 500 random words in 15 minutes. 
Discipline 9, "Spoken Numbers": Ryu Song I was able to recall 547 decimal digits that had been read at a rate of one per second (WMSC World Championship 2019). Tenuun Tamir and several other Mongolian or Chinese contestants were able to recall more than 600 decimal digits that had been read to them at a rate of one per second. 
Discipline 10, "Speed Cards": Munkhshur Narmandak memorized 981 cards in five minutes, and several others memorized more than 600 cards in five minutes. 
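Converting a few of the records above into per-second memorization rates makes the speeds easier to compare. The figures are taken directly from the performances listed above:

```python
# (description, items memorized, memorization time in seconds)
records = [
    ("binary digits, 30 min (Ryu Song I)", 7485, 30 * 60),
    ("decimal digits, 60 min (Ryu Song I)", 4620, 60 * 60),
    ("speed numbers, 5 min (Wei Quinru)", 642, 5 * 60),
    ("spoken numbers, read 1/sec (Ryu Song I)", 547, 547),
]

for name, items, secs in records:
    print(f"{name}: {items / secs:.2f} items memorized per second")
```

Even the slowest of these rates is on the order of one item per second, sustained for many minutes, versus the minutes-per-item timescale of protein synthesis.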

Below from the World Memory Championships site is a table showing some of the best performers (link).

fastest memorizers

What we have in the performance records above is what can roughly be described as lightning-fast memorization ability. Such an ability has been demonstrated by many subjects, doing many different types of memorization. The performances listed above are many times faster than any conceivable result that could be produced if memories are stored in brains. There does not exist any detailed credible theory that can explain fast memorization by neural or synaptic processes. When neuroscientists say something about how memories form, they typically engage in hand-waving that vaguely refers to processes that are known to be very slow, such as synaptic strengthening. 

Routinely displaying instant recall abilities utterly unaccountable by the activity of brains completely lacking in addresses, sorting or indexes (the things that make fast retrieval possible in computers), humans do not recall at the speed of brains. Humans recall at the speed of souls. And the fastest memorizers do not memorize at the speed of brains. Such memorizers memorize at the speed of souls. 

For other posts documenting the ability of some humans to memorize at a blazing fast speed, see my posts with a tag of "photographic memory" or "eidetic memory." On page 29 of the nineteenth century book here, we have an interesting account of photographic memory obtained under hypnosis (with it apparently progressing to become photographic memorization that could occur outside of hypnosis). The author states that eventually, outside of hypnosis, "the duration of a single second or a mere glimpse at the page was sufficient for the pupils to retain in their memory the whole contents of it."

neuroscientist hand waving