Saturday, March 21, 2020

Exhibit A Suggesting Scientists Don't Understand How a Brain Could Store a Memory

Many a scientist claims that human memories are stored in brains. But when asked to explain how it is that a brain could retrieve a memory, scientists go round in circles, producing insubstantial circumlocutions that fail to provide any confidence that they understand such a thing.  I discussed such explanatory shortfalls in my 2019 post “Exhibit A Suggesting Scientists Don't Know How a Brain Could Retrieve a Memory,” and my 2020 post  "Exhibit B Suggesting Scientists Don't Know How a Brain Could Retrieve a Memory."  When they attempt to explain how a brain could store a memory, scientists give the same kind of insubstantial and empty discussion, the kind of discussion that should fail to convince anyone that they have a real understanding of how a brain could do such a thing. 

An example of such a thing is an article that appeared on the website of the major British newspaper The Guardian. The article, by neuroscientist Dean Burnett, was entitled "What happens in your brain when you make a memory?" Burnett follows a kind of standard formula used by writers on this topic.  The rules are roughly as follows:

(1) Attempt to persuade readers that you understand memory by talking about the difference between long-term memory and short-term memory.   Whenever such discussion occurs, it actually does nothing to show any understanding of a neural basis for memory, for such a discussion can occur based purely on observations of how well people perform on different memory tasks. 

(2) Attempt to persuade readers that you understand memory by talking about the difference between episodic memory and conceptual memory. This again is something that can be discussed without any reference to the brain, so any such discussion doesn't do anything to establish some understanding of a neural basis for memory. 
(3) Make frequent use of the word "encoding," without actually presenting any theory of encoding.   Neuroscientists love to use the word "encoding" when discussing memory acquisition, as if they had some understanding of some system of encoding or translation by which episodic or conceptual memories could be translated into neural states or synapse states.  They do not have any such understanding. No neuroscientist has ever presented a credible, coherent, detailed theory of memory encoding, of how conceptual knowledge or episodic experiences could ever be translated into neural states or synapse states.  Any attempt to do such a thing would entangle you in an ocean of difficulties.  
(4) Mention one or two parts of the brain, usually exaggerating their significance.  I'll give an example of this in a moment. 
(5) Talk dogmatically about synapses, creating the impression that memories are stored in them, without discussing their enormous instability and unsuitability as a place for storing memories that might last for decades. 

 Burnett pretty much follows such a customary set of rules. He uses the word "encoding" or "encode" four times, but fails to present any substantive explanation or idea as to how any human episodic or conceptual information could ever be encoded, in the sense of being translated into neural states. Burnett claims,  "The hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses." He provides no evidence to back up this claim. There are some important reasons for thinking that the claim cannot possibly be correct. 

One reason is that studies have shown that people with very heavy hippocampus damage can have a normal ability to acquire conceptual and learned information.  The paper here discussed three subjects who had lost about half of the matter in their hippocampi.  We read the following:

"All three patients are not only competent in speech and language but have learned to read, write, and spell. ...With regard to the acquisition of factual knowledge, which is another hallmark of semantic memory, the vocabulary, information, and comprehension subtests of the VIQ scale are among the best indices available, and here, too, all three patients obtained scores within the normal range (Table 2). A remarkable feature of Beth’s and Jon’s stores of semantic memories is that they were accumulated after these patients had incurred the damage to their hippocampi."

The same thing was found by the study here. A group of 18 subjects were studied, subjects with severe hippocampus damage. Some 28% to 62% of the hippocampi of these subjects were damaged or destroyed. The subjects had episodic memory problems, but "relatively preserved intelligence, language abilities, and academic attainments."  We are told, "In all but one of our cases, the patients...attended mainstream schools."  Could patients with such heavy hippocampus damage have normal academic achievements if it were true that "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses"?  Not at all. In a similar vein, the study here involving 17 rhesus monkeys found that "monkeys with hippocampal lesions showed no deficits in learning and later recognizing new scenes."

A study looked at memory performance in 140 patients who had undergone an operation called an amygdalohippocampectomy, which removes both the hippocampus and the amygdala. Table 1 of the study found that such an operation had no significant effect on nonverbal memory, causing a difference of less than 3%. Table 3 shows that most patients were unchanged in their verbal memory and nonverbal memory. More patients had a loss in memory than a gain, although about 13% had a gain in nonverbal memory. These results are not at all consistent with Burnett's claim that "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses."

There is a reason why it cannot be true that a new memory requires the formation of new synapses. The reason is that humans can form new memories instantly, while both the formation of a new synapse and the strengthening of a synapse require minutes of time.  If someone fires a bullet that passes near your head, you will instantly form a permanent new memory that you will keep for the rest of your life.  The same thing will happen the moment you break your leg in a biking accident.  Claiming that memories require either the formation of new synapses or the strengthening of synapses is incompatible with a fact of human experience: that humans can form new memories instantly. 

If it were true that memories were stored by a strengthening of synapses, this would be a slow process. The only way in which a synapse can be strengthened is if proteins are added to it. We know that the synthesis of new proteins is a rather slow process, requiring minutes of time. In addition, there would have to be some very complicated encoding going on if a memory were to be stored in synapses. The reality of newly learned knowledge and new experience would somehow have to be encoded or translated into some brain state that would store this information. When we add up the time needed for this protein synthesis and the time needed for this encoding, we find that the theory of memory storage in brain synapses predicts that the acquisition of new memories should be a very slow affair, which can occur at only a tiny bandwidth, a speed which is like a mere trickle. But experiments show that we can actually acquire new memories at a speed more than 1000 times greater than such a tiny trickle.

One such experiment is the experiment described in the scientific paper “Visual long-term memory has a massive storage capacity for object details.” The experimenters showed some subjects 2500 images over the course of five and a half hours, and the subjects viewed each image for only three seconds. Then the subjects were tested in the following way described by the paper:

"Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high  (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images."

In this experiment, pairs of closely matched images were used. A subject might be presented for three seconds with one of the two images in a pair, and then hours later be shown both images in the pair, and be asked which of the two was the one he saw.

Although the authors probably did not intend for their experiment to be any such thing, their experiment is a great experiment for disproving the prevailing dogma about memory storage in the brain. Let us imagine that memories were being stored in the brain by a process of synapse strengthening. Each time a memory was stored, it would involve the synthesis of new proteins (requiring minutes), and also additional time (presumably additional minutes) for an encoding effect in which knowledge or experience was translated into neural states. If the brain stored memories in such a way, it could not possibly keep up with the task of remembering thousands of images that each appeared for only three seconds.
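The arithmetic behind this point can be laid out plainly. The image count, session length, and viewing time below come from the paper itself; the "minutes per memory" figure is my own assumption, standing in for the minutes that protein synthesis would supposedly require:

```python
# Throughput arithmetic using the figures from the paper cited above:
# 2,500 images viewed over about 5.5 hours, 3 seconds per image.
# The minutes-per-memory figure is an assumption for illustration only.

images = 2500
session_hours = 5.5
seconds_per_image = 3

images_per_minute = images / (session_hours * 60)
print(f"average presentation rate: {images_per_minute:.1f} images per minute")

assumed_minutes_per_memory = 5   # hypothetical synapse-strengthening time
storage_hours = images * assumed_minutes_per_memory / 60
print(f"at {assumed_minutes_per_memory} min per memory, storing all the images "
      f"would take about {storage_hours:.0f} hours")
```

Even granting a generous five minutes per stored memory, the synapse-strengthening account would need roughly 200 hours of storage work for a five-and-a-half-hour viewing session.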

There is another reason why it cannot be true that we remember things because "the hippocampus links all of the relevant information together and encodes it into a new memory by forming new synapses," as Burnett claims.  The reason is that synapses are too unstable to be a storage place for memories that can last for decades.  The proteins in synapses have short lifetimes, lasting for an average of no more than about two weeks.

A fairly recent paper on the lifetime of synapse proteins is the June 2018 paper “Local and global influences on protein turnover in neurons and glia.” The paper starts out by noting that one earlier 2010 study found that the average half-life of brain proteins was about 9 days, and that a 2013 study found that the average half-life of brain proteins was about 5 days. The study then notes in Figure 3 that the average half-life of a synapse protein is only about 5 days, and that all of the main types of brain proteins (such as nucleus, mitochondrion, etc.) have half-lives of less than 20 days. The synapses themselves do not last for more than a few years.  So synapses lack the stability that would have to exist if memories are to be stored for years.  Humans can reliably remember things for more than 50 years. Such a length of time is about 1000 times longer than the lifetime of proteins in synapses. 
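The mismatch in timescales here is stark. As a rough back-of-the-envelope sketch (my own arithmetic, using the half-life and lifetime figures cited above):

```python
# Decay arithmetic using the figures cited above: a ~5-day half-life for
# synapse proteins, and a two-week protein lifetime compared against a
# 50-year memory.  Simple exponential turnover is assumed for illustration.

half_life_days = 5
total_days = 50 * 365                      # fifty years of retention

num_half_lives = total_days / half_life_days
fraction_left = 0.5 ** num_half_lives      # underflows to 0.0 in double
                                           # precision -- effectively nothing
                                           # of the original protein remains

ratio = total_days / 14                    # 50 years vs. a ~2-week lifetime

print(f"{num_half_lives:.0f} half-lives elapse in 50 years")
print(f"fraction of original protein remaining: about {fraction_left}")
print(f"50 years is roughly {ratio:.0f} times a two-week protein lifetime")
```

On these figures a 50-year memory outlasts its supposed molecular substrate by three orders of magnitude, which is the "about 1000 times longer" point made above.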

Without providing any evidence for such a claim, Burnett repeats the widely taught idea that memories migrate from one part of the brain to another. He states the following:

"Newer memories, once consolidated, appear to reside in the hippocampus for a while. But as more memories are formed, the neurons that represent a specific memory migrate further into the cortex."

We have no understanding of how a neuron could represent a memory, no evidence that memories are written to any part of the brain, and no understanding of how any such thing as a writing of a memory could occur in neurons and synapses. We also have zero understanding of how a written memory could migrate from one place in a brain to another place, nor do we have any direct evidence that any such migration occurs.  But we do have an extremely strong reason for thinking that accurate memories could not possibly migrate from a hippocampus into the cortex. The reason has to do with the very low reliability of signal transmission in the cortex. 

A scientific paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower." 

Another paper concurs by also saying that there are two problems (unreliable synaptic transmission and a randomness in the signal strength when the transmission occurs):

"On average most synapses respond to only less than half of the presynaptic spikes, and if they respond, the amplitude of the postsynaptic current varies. This high degree of unreliability has been puzzling as it impairs information transmission."

So the transmission of information into the cortex must be extremely unreliable.  To imagine how unreliable such a transmission would be, with only a 10% chance of a nerve signal transmitting, imagine that you are trying to send an email to someone, but your email provider is so unreliable that there is only a 10% chance that any character that you type will be accurately transmitted.  You might send your friend an email saying, "Hi Joe, what do you say we have dinner at that new steak place that opened on 42nd Street?"   But the email your friend got would be unreadable gibberish, something like "Hwdsd ondSt?"  That's the type of information scrambling that would occur if memories were to migrate from the hippocampus into the cortex, given a cortex where there is only a 10% chance of any action potential (or nerve signal) transmitting.  
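The email analogy above can be made concrete with a tiny simulation. This is a minimal sketch of my own, not anything from the papers quoted: it assumes each character independently has only a 10% chance of getting through, with a fixed seed so the run is repeatable:

```python
import random

# Toy model of the unreliable-channel analogy above: each character has
# only a 10% chance of being transmitted.  Independence per character
# and the seed value are assumptions made purely for illustration.

def unreliable_channel(message, p_transmit=0.1, seed=42):
    rng = random.Random(seed)   # seeded so the demonstration is repeatable
    return "".join(ch for ch in message if rng.random() < p_transmit)

sent = ("Hi Joe, what do you say we have dinner at that new steak place "
        "that opened on 42nd Street?")
received = unreliable_channel(sent)
print(received)   # a short scramble of scattered characters, not a readable message
```

Roughly nine characters of the ninety-odd sent would survive, in the spirit of the "Hwdsd ondSt?" example above.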

So if memories were migrating into our cortex, we would never be able to remember things accurately.  But humans have an astonishing capability for memorizing vast amounts of information with 100% accuracy. It is a fact that some Muslims accurately memorize every word of their holy book.  We also know that actors can accurately memorize each of the 1569 lines of the role of Hamlet, and that Wagnerian tenors can accurately memorize both the notes and the words of the extremely long parts of Siegfried and Tristan (the role of Siegfried requires someone to sing on stage for most of four hours). 

Once we carefully ponder all the reasons for rejecting its main claims, and also carefully ponder the lack of any discussion of robust evidence for a brain storage of memory,  we can see that an article such as the Guardian article is a kind of Exhibit A that modern neuroscientists have no real understanding of how a brain could do any such thing as store a memory.  Nature never told us that brains store memories. It was merely neuroscientists who made such a claim, without good evidence. 

The article by Burnett is not a detailed scientific paper, but if we look at a typical scientific paper attempting to present evidence for memory storage in a brain, we will not find any robust evidence. A recent example is the 2019 paper "Changes of Synaptic Structures Associated with Learning, Memory and Diseases."  The paper fails to provide any solid evidence that synapse states have any causal relation with memory acquisition.  No clear message comes from findings such as "motor learning rapidly increases the formation and elimination of spines of L5 PyrNs in the mouse primary motor cortex (M1), leading to a transient increase in spine number, which over days returns to the baseline," combined with other statements such as "another study showed that spine dynamics on L2/3 PyrNs are not affected by motor learning."  Anyone looking to find a relation between one effect and some other physical factor (in a small number of tries) will have perhaps a 25% chance of finding what looks like a correlation purely by chance.  For example, if I try to look for a relation between stock market declines and rainfall, I'll have perhaps a 25% chance of finding such an effect if I test on four random days. So we would expect that neuroscientists hoping to find some correlation between synapse activity and learning would find such a correlation in a certain fraction of the times they tried, purely by chance, even if synapses are not a storage place of learned information. Nowhere in this paper is there anything like an explanation of how a brain could store a memory, and when the paper authors confess that "the stability of memory and the dynamism of synapses remain to be reconciled," they basically admit that they have no answer to the objection that synapses are too unstable to be storing memories that last for decades. 
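The point about chance correlations is a standard multiple-comparisons observation, and the arithmetic is easy to state. As an illustrative sketch (using the conventional 5% per-test false-positive rate as an assumption, rather than the informal 25%-in-four-tries figure above):

```python
# Standard multiple-comparisons arithmetic: the more independent looks
# taken at a data set, the more likely a purely chance "correlation"
# turns up.  The 5% per-test false-positive rate is the conventional
# threshold, used here as an assumption for illustration.

def chance_of_spurious_hit(p_per_test, num_tests):
    """Probability that at least one of num_tests independent looks
    at the data yields a false positive: 1 - (1 - p)^n."""
    return 1 - (1 - p_per_test) ** num_tests

for k in (1, 4, 10, 20):
    print(f"{k:2d} tests -> {chance_of_spurious_hit(0.05, k):.0%} "
          "chance of at least one spurious hit")
```

With twenty different ways of slicing the data, a researcher has better-than-even odds of finding at least one "correlation" by chance alone, even if no real relation exists.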

Friday, February 21, 2020

Fraud and Misconduct Are Not Very Rare in Biology

Without getting into the topic of outright fraud, we know of many common problems that afflict a sizable percentage of scientific papers. One is that it has become quite common for scientists to use titles for their papers announcing results or causal claims that are not actually justified by any data in the papers. A scientific study found that 48% of scientific papers use "spin" in their abstracts. Another problem is that scientists may change their hypothesis after starting to gather data, a methodological sin that is called HARKing, which stands for Hypothesizing After Results are Known. An additional problem is that given a body of data that can be analyzed in very many ways, scientists may simply experiment with different methods of data analysis until one produces the result they are looking for. Still another problem is that scientists may use various techniques to adjust the data they collect, such as stopping data collection once they have found some statistical result they are looking for, or arbitrarily excluding data points that create problems for whatever claim they are trying to show.  Then there is the fact that scientific papers are very often a mixture of observation and speculation, without the authors making clear which part is speculation.  Then there is the fact that through the use of heavy jargon, scientists can make the most groundless and fanciful speculation sound as if it were something strongly rooted in fact, when it is no such thing. Then there is the fact that scientific research is often statistically underpowered, and very often involves sample sizes too small to justify any confidence in the results. 

All of these are lesser sins. But what about the far more egregious sin of outright researcher misconduct or fraud?  The scientists Bik, Casadevall and Fang attempted to find evidence of such misconduct by looking for problematic images in biology papers.  We can imagine various ways in which a scientific paper might have a problematic image or graph indicating researcher misconduct:

(1) A photo in a particular paper might be duplicated in a way that should not occur.  For example, if a paper is showing two different cells or cell groups in two different photos,  those two photos should not look absolutely identical, with exactly the same pixels. Similarly, brain scans of two different subjects should not look absolutely identical, nor should photos of two different research animals. 
(2) A photo in a particular paper that should be different from some other photo in that paper might be simply the first photo with one or more minor differences (comparable to submitting a photo of your sister, adjusted to have gray hair, and labeled as a photo of your mother). 
(3) A photo in a particular paper that should be original to that paper might be simply a duplicate of some photo that appeared in some previous paper by some other author, or a duplicate with minor changes.
(4) A photo in a particular paper might show evidence of being Photoshopped.  For example, there might be 10 areas of the photo that are exact copies of each other, with all the pixels being exactly the same. 
(5) A graph or diagram in a paper that should be original to that paper might be simply a duplicate of some graph or diagram that appeared in some previous paper by some other author. 
(6) A graph might have evidence of artificial manipulation, indicating it did not naturally arise from graphing software. For example, one of the bars on a bar graph might not be all the same color. 


There are quite a few other possibilities by which researcher misconduct could be identified by examining images, graphs or figures. Bik, Casadevall and Fang made an effort to find such problematic figures. In their paper "The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications," they report a large-scale problem.  They conclude, "The results demonstrate that problematic images are disturbingly common in the biomedical literature and may be found in approximately 1 out of every 25 published articles containing photographic image data."  

But there is a reason for thinking that the real percentage of research papers with problematic images or graphs is far greater than this figure of only 4%.  The reason is that the techniques used by Bik, Casadevall and Fang seem like rather inefficient techniques capable of finding only a fraction of the papers with problematic images or graphs.  They describe their technique as follows (20,621 papers were checked): 

"Figure panels containing line art, such as bar graphs or line graphs, were not included in the study. Images within the same paper were visually inspected for inappropriate duplications, repositioning, or possible manipulation (e.g., duplications of bands within the same blot). All papers were initially screened by one of the authors (E.M.B.). If a possible problematic image or set of images was detected, figures were further examined for evidence of image duplication or manipulation by using the Adjust Color tool in Preview software on an Apple iMac computer. No additional special imaging software was used. Supplementary figures were not part of the initial search but were examined in papers in which problems were found in images in the primary manuscript."

This seems like a rather inefficient technique which would find less than half of the evidence for researcher misconduct that might be present in photos, diagrams and graphs. For one thing, the technique ignored graphs and diagrams. Probably one of the biggest possibilities of misconduct is researchers creating artificially manipulated graphs not naturally arising from graphing software, or researchers simply stealing graphs from other scientific papers. For another thing, the technique used would only find cases in which a single paper showed evidence for image shenanigans. The technique would do nothing to find cases in which one paper was inappropriately using an image or graph that came from some other paper by different authors. Also, the technique ignored supplemental figures (unless a problem was found in the main figures). Such supplemental figures are often a significant fraction of the total number of images and graphs in a scientific paper, and are often referenced in the text of a paper as supporting evidence. So they should receive the same scrutiny as the other images or figures in a paper. 

I can imagine a far more efficient technique for looking for misconduct related to imagery and graphs. Every photo, every diagram, every figure and every graph in every paper in  a very large set of papers on a topic (including supplemental figures) would be put into a database.  A computer program with access to that database would then run through all the images, looking for duplicates or near-duplicates in the images, as well as other evidence of researcher misconduct. Such a program might also make use of "reverse image search" capabilities available online.  Such a computer program crunching the image data could be combined with manual checks.  Such a technique would probably find twice as many problems.  Because the technique for detecting problematic images described by  Bik, Casadevall and Fang is a rather inefficient technique skipping half or more of its potential targets, we have reason to suspect that they have merely shown us the tip of the iceberg, and that the actual rate of problematic images and graphs (suggesting researcher misconduct) in biology papers is much greater than 4% -- perhaps 8% or 10%. 
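A rough sketch of how such a screening program might work is given below. Everything here is hypothetical, my own illustration of the idea rather than any real pipeline: exact pixel-for-pixel duplicates can be caught by hashing image bytes, while a simple "difference hash" (a standard perceptual-hashing trick) catches duplicates republished with minor tweaks. A real system would decode actual image files with an imaging library; plain lists of grayscale rows stand in for pixels here:

```python
import hashlib

# Hypothetical sketch of computerized duplicate-image screening.
# exact_fingerprint catches pixel-identical copies; difference_hash is a
# toy perceptual hash whose bit pattern survives minor retouching.

def exact_fingerprint(image_bytes):
    """Identical files (exact pixel-for-pixel copies) share this digest."""
    return hashlib.sha256(image_bytes).hexdigest()

def difference_hash(pixels):
    """Toy dHash: one bit per horizontally adjacent pixel pair.
    Near-duplicate images differ in only a few bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(bits_a, bits_b):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(bits_a, bits_b))

# Two tiny "images": the second is the first with one pixel slightly
# altered, mimicking a photo republished with a minor tweak.
img_a = [[10, 20, 30], [40, 35, 50]]
img_b = [[10, 20, 30], [40, 36, 50]]

print(exact_fingerprint(bytes(sum(img_a, []))) ==
      exact_fingerprint(bytes(sum(img_b, []))))          # False: bytes differ
print(hamming(difference_hash(img_a), difference_hash(img_b)))  # 0: same pattern
```

The design point is that the exact hash flags only perfect copies, while the perceptual hash flags the "photo of your sister with gray hair" class of near-duplicates; a database-wide scan would compare every figure's hashes against every other's.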

A later paper ("Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience") by Bik, Casadevall and Fang (along with Davis and Kullas) involved analysis of a different set of papers. The paper concluded that "as many as 35,000 papers in the literature are candidates for retraction due to inappropriate image duplication."  They found that 6% of the papers "contained inappropriately duplicated images." They reached this conclusion after examining a set of papers in the journal Molecular and Cellular Biology.  To reach this conclusion, they used the same rather inefficient method of their previous study I just cited. They state, "Papers were scanned using the same procedure as used in our prior study."  We can only wonder how many biology papers would be found to be "candidates for retraction" if a really efficient (partially computerized) method was used to search for the image problems, one using an image database and reverse image searching, and one checking not only photos but also graphs, and one also checking the supplemental figures in the papers.  Such a technique might easily find that 100,000 or more biology papers were candidates for retraction.

We should not be terribly surprised by such a situation. In modern academia there is relentless pressure for scientists to grind out papers at a high rate. There also seem to be relatively few quality checks on the papers submitted to scientific journals. Peer review serves largely as an ideological filter, to prevent the publication of papers that conflict with the cherished dogmas of the majority. There are no spot checks of papers submitted for publication, in which reviewers ask to see the source data or original lab notes or lab photographs produced in experiments.  The problematic papers found by the studies mentioned above managed to pass peer review despite glaring duplication errors, indicating that peer reviewers are not making much of an attempt to exclude fraud.  Given this misconduct problem and the items mentioned in my first paragraph, and given the frequently careless speech of so many biologists, in which they so often speak as if unproven claims or discredited claims are facts, it seems there is a significant credibility problem in academic biology. 

In an unsparing essay entitled "The Intellectual and Moral Decline in Academic Research," Edward Archer, PhD, states the following:

"My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields. Yet, more importantly, training in 'science' is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts....American universities often produce corrupt, incompetent, or scientifically meaningless research that endangers the public, confounds public policy, and diminishes our nation’s preparedness to meet future challenges....Universities and federal funding agencies lack accountability and often ignore fraud and misconduct. There are numerous examples in which universities refused to hold their faculty accountable until elected officials intervened, and even when found guilty, faculty researchers continued to receive tens of millions of taxpayers’ dollars. Those facts are an open secret: When anonymously surveyed, over 14 percent of researchers report that their colleagues commit fraud and 72 percent report other questionable practices....Retractions, misconduct, and harassment are only part of the decline. Incompetence is another....The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor. That failure means taxpayers are being misled by results that are non-reproducible or demonstrably false."

Justin T. Pickett, PhD, has written a long, illuminating post entitled "How Universities Cover Up Scientific Fraud." It states the following:

"I learned a hard lesson last year, after blowing the whistle on my coauthor, mentor and friend: not all universities can be trusted to investigate accusations of fraud, or even to follow their own misconduct policies. Then I found out how widespread the problem is: experts have been sounding the alarm for over thirty years. One in fifty scientists fakes research by fabricating or falsifying data....Claims that universities cover up fraud and even retaliate against whistleblowers are common....More than three decades ago, after spending years at the National Institutes of Health studying scientific fraud, Walter Stewart came to a similar conclusion. His research showed that fraud is widespread in science, that universities aren’t sympathetic to whistleblowers and that those who report fraudsters can expect only one thing: 'no matter what happens, apart from a miracle, nothing will happen.' ”

An Editor-in-Chief of the journal Molecular Brain has found evidence suggesting that a significant fraction of neuroscientists may not have the raw data backing up the claims in their scientific papers. He states the following:

"As an Editor-in-Chief of Molecular Brain, I have handled 180 manuscripts since early 2017 and have made 41 editorial decisions categorized as 'Revise before review,' requesting that the authors provide raw data. Surprisingly, among those 41 manuscripts, 21 were withdrawn without providing raw data, indicating that requiring raw data drove away more than half of the manuscripts. I rejected 19 out of the remaining 20 manuscripts because of insufficient raw data. Thus, more than 97% of the 41 manuscripts did not present the raw data supporting their results when requested by an editor, suggesting a possibility that the raw data did not exist from the beginning, at least in some portions of these cases....We really cannot know what percentage of those manuscripts have fabricated data....Approximately 53% of the 227 respondents from the life sciences field answered that they suspect more than two-thirds of the manuscripts that were withdrawn or did not provide sufficient raw data might have...fabricated the data."

Tuesday, February 4, 2020

When Animals Cast Doubt on Dogmas About Brains

These days our science news sources typically try to get us all excited about many a science study that is not worthy of our attention.  But when a study appears that tells us something important, such a study will receive almost no attention if the study reports something that conflicts with prevailing dogmas about reality.  So some recent not-much-there study involving zapping dead brain tissue got lots of attention in the press, but a far more important neuroscience study received almost no attention.  The more important study was one that showed a rat with almost no brain had normal cognitive and memory capabilities. 

The study was reported in the Scientific Reports sub-journal of the very prestigious journal Nature, and had the title "Life without a brain: Neuroradiological and behavioral evidence of neuroplasticity necessary to sustain brain function in the face of severe hydrocephalus."  The study examined a rat named R222 that had lost almost all of its brain because of a disease called hydrocephalus, which replaces brain tissue with a watery fluid. The study found that despite the rat having lost almost all of its brain, "Indices of spatial memory and learning across the reported Barnes maze parameters (A) show that R222 (as indicated by the red arrow in the figures) was within the normal range of behavior, compared to the age matched cohort."   In other words, the rat with almost no brain seemed to learn and remember as well as a rat with a full brain. 

This result should not come as any surprise to anyone familiar with the research of the physician John Lorber. Lorber studied many human patients with hydrocephalus, in which healthy brain tissue is gradually replaced by a watery fluid. Lorber's research is described in this interesting scientific paper. A mathematics student with an IQ of 130 and a verbal IQ of 140 was found to have “virtually no brain.” His vision was apparently perfect except for a refraction error, even though he had no visual cortex (the part of the brain involved in sight perception).

In the paper we are told that of the patients Lorber classified as having extreme hydrocephalus (with 90% of the area inside the cranium filled with spinal fluid), half had an IQ of 100 or more. The paper mentions about 16 such patients, but the number with extreme hydrocephalus was actually about 60, as this article states, using information from this original source, which mentions about 10 percent of a group of 600. So the actual number of these people with tiny brains and above-average intelligence was about 30. The paper states:

"[Lorber] described a woman with an extreme degree of hydrocephalus showing 'virtually no cerebral mantle' who had an IQ of 118, a girl aged 5 who had an IQ of 123 despite extreme hydrocephalus, a 7-year-old boy with gross hydrocephalus and an IQ of 128, another young adult with gross hydrocephalus and a verbal IQ of 144, and a nurse and an English teacher who both led normal lives despite gross hydrocephalus."

Sadly, the authors of the "Life without a brain" paper seem to have learned too little from the important observational facts they recorded. Referring to part of the brain, they claim that "the hippocampus is needed for memory," even though their rat R222 had no hippocampus that could be detected in a brain scan. They stated, "It was not possible from these images of R222 to identify the caudate/putamen, amygdala, or hippocampus."  Not very convincingly, the authors claimed that rat R222 had a kind of flattened hippocampus, based on some chemical signs (which is rather like guessing that some flattened roadkill was a particular type of animal).

But how could this rat with almost no brain have performed normally on the memory and cognitive tests?  The authors appeal to a miracle, saying, "This rare case can be viewed as one of nature’s miracles." If you believe that brains are what store memories and cause thinking, you must regard cases such as this (and Lorber's cases) as "miracles," but when a scientist needs to appeal to such a thing, it is a gigantic red flag. Much better if we have a theory of the mind under which such results are what we would expect rather than a miracle.  To get such a theory, we must abandon the unproven and very discredited idea that brains store memories and that brains create minds. 

The Neuroskeptic blogger at Discover magazine's online site mentions this rat R222, and the case of humans who performed well despite having the vast majority of their brain lost due to disease.  Let's give him credit for mentioning the latter. But we shouldn't applaud his use of a trick that skeptics constantly employ: always ask for something you think you don't have.

This "keep moving the goalposts" trick works rather like this. If someone shows a photo looking like a ghost on the edge of a photo, say that it doesn't matter because the ghost isn't in the middle of the photo. If someone then shows you a photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because the photo isn't a 6-megabyte high resolution photo.  If someone then shows you a 6-megabyte high resolution photo that appears to show a ghost in the middle of the photo, say that it doesn't matter, because it's just a photo and not a movie. If someone then shows you a movie of what looks like a ghost, say that it doesn't matter, because there were not multiple witnesses of the movie being made. If someone then shows you a movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the movie isn't a movie-theater-quality 35 millimeter Technicolor Panavision movie.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the ghost wasn't levitating. If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter because the ghost wasn't talking.  If someone then shows you a movie-theater-quality 35 millimeter Technicolor Panavision movie of what looks like a levitating talking ghost, the photography of which was observed by multiple witnesses, say that it doesn't matter, because the levitating talking ghost didn't explain the meaning of life to your satisfaction. 

The Neuroskeptic uses such a technique when he writes the following:

"In the case of the famous human cases of hydrocephalus, the only evidence we have are the brain scans showing massively abnormal brain anatomy. There has never, to my knowledge, been a detailed post-mortem study of a human case."

If there were such an alleged "shortfall," it would be irrelevant, because we can tell perfectly well from a brain scan the degree of brain tissue loss when someone has lost most of their brain, as happened in the case of Lorber's patients and other hydrocephalus patients.  Complaining about the lack of an autopsy study in such patients is like saying that you don't know that your wife lacks a penis, because no one did an autopsy study on her.  Neuroskeptic's claim of no autopsy studies on hydrocephalus patients is incorrect. When I do a Google search for "autopsy of hydrocephalus patient," I quickly find several studies which did such a thing, such as this one which reports that one of 10 patients with massive brain loss due to hydrocephalus was "cognitively unimpaired."  Why did our Neuroskeptic blogger insinuate that such autopsy studies do not exist, when discovering their existence is as easy as checking the weather?

There are many animal studies (such as those of Karl Lashley) that conflict with prevailing dogmas about the brain. One such dogma is that the cerebral cortex is necessary for mental function.  But some scientists once tried removing the cerebral cortex of newborn cats. The abstract of their paper reports no harmful results:

"The cats ate, drank and groomed themselves adequately. Adequate maternal and female sexual behaviour was observed. They utilized the visual and haptic senses with respect to external space. Two cats were trained to perform visual discrimination in a T-maze. The adequacy of the behaviour of these cats is compared to that of animals with similar lesions made at maturity."

Figure 4 of the full paper clearly shows that one of the cats without a cerebral cortex learned well in the T-maze test, improving from a score of 50 to almost 100. Karl Lashley did innumerable experiments after removing parts of animals' brains. He found that you could typically remove very large parts of an animal's brain without affecting the animal's performance on tests of learning and memory.

Ignoring such observational realities, our neuroscientists cling to their dogmas, such as the dogma that memories are stored in brains,  and that the brain is the source of human thinking.  Another example of a dubious neuroscience dogma is the claim that the brain uses coding for communication.   A scientific paper discusses how common this claim is:

"A pervasive paradigm in neuroscience is the concept of neural coding (deCharms and Zador 2000): the query 'neural coding' on Google Scholar retrieved about 15,000 papers in the last 10 years. Neural coding is a communication metaphor. An example is the Morse code (Fig. 1A), which was used to transmit texts over telegraph lines: each letter is mapped to a binary sequence (dots and dashes)."
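To see how strong a claim the "coding" metaphor is, it helps to spell out what a code such as Morse code actually involves: an explicit, fixed mapping from symbols to signal sequences. A toy sketch of that mapping (the dictionary here covers only a few letters, purely for illustration; nothing like such an explicit mapping has been demonstrated for neurons):

```python
# Morse code is a fixed, publicly specifiable mapping from letters
# to dot/dash sequences -- this is what "a code" means in the
# communication metaphor the paper describes.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

def encode(text):
    # Map each letter to its dot/dash sequence, separated by spaces.
    return " ".join(MORSE[ch] for ch in text)

print(encode("SOS"))  # ... --- ...
```

The point of the sketch is only that a genuine code can be written down and checked; no one has produced any comparable table translating neural states into learned information.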

But the idea that the brain uses something like a Morse code to communicate has no real basis in fact.  The paper quoted above almost confesses this by stating the following:

"Technically, it is found that the activity of many neurons varies with stimulus parameter, but also with sensory, behavioral, and cognitive context; neurons are also active in the absence of any particular stimulus. A tight correspondence between stimulus property and neural activity only exists within a highly constrained experimental situation. Thus, neural codes have much less representational power than generally claimed or implied."

That's moving in the right direction, but it would be more forthright and accurate to say that there is zero real evidence that neurons are using any type of code to represent human learned information, and that the whole idea of "neural coding" is just a big piece of wishful thinking where scientists are seeing what they hope to see, like someone looking at a cloud and saying, "That looks like my mother."

Saturday, January 18, 2020

"Particle Experiences" and Other Dubious Ideas of Panpsychism

The book Galileo's Error: Foundations for a New Science of Consciousness by philosopher Philip Goff is a book with quite a few misfires. The biggest is an error extremely common among today's philosophers: using the way-too-small term “problem of consciousness” when discussing current shortfalls in explaining the human mind.

What we actually have is an extremely large “problem of explaining human mental capabilities and human mental experiences” that is vastly larger than merely explaining consciousness. The problem includes all the following difficulties and many others:

  1. the problem of explaining how humans are able to have abstract ideas;
  2. the problem of explaining how humans are able to store learned information, despite the lack of any detailed theory as to how learned knowledge could ever be translated into neural states or synapse states;
  3. the problem of explaining how humans are able to reliably remember things for more than 50 years, despite extremely rapid protein turnover in synapses, which should prevent brain-based storage of memories for any period of time longer than a few weeks;
  4. the problem of how humans are able to instantly retrieve little accessed information, despite the lack of anything like an addressing system or an indexing system in the brain;
  5. the problem of how humans are able to produce great works of creativity and imagination;
  6. the problem of how humans are able to be conscious at all;
  7. the problem of why humans have such a large variety of paranormal psychic experiences and capabilities such as ESP capabilities that have been well-established by laboratory tests, and near-death experiences that are very common, often occurring when brain activity has shut down;
  8. the problem of how humans have such diverse skills and experiences as mathematical reasoning, moral insight, philosophical reasoning, and refined emotional and spiritual experiences;
  9. the problem of self-hood and personal identity, why it is that we always continue to have the experience of being the same person, rather than just experiencing a bundle of miscellaneous sensations;
  10. the problem of intention and will, how is it that a mind can will particular physical outcomes.

It is therefore a ridiculous oversimplification for philosophers to be raising a mere "problem of consciousness” that refers to only one of these problems, and to be speaking as if such a “problem of consciousness” is the only difficulty that needs to be tackled by a philosophy of mind. But that is exactly what Philip Goff does in his book. We have an indication of his failure to pay attention to the problems he should be addressing by the fact that (according to his index) he refers to memory on only two pages of his book, both of which say nothing of substance about human memory or the problems of explaining it. His index also contains no mention of insight, imagination, ideas, will, volition or abstract ideas. The book's sole mention of the problem of self-hood or the self is (according to the index) a single page referring to “self, as illusion.” The book's sole reference to paranormal phenomena is a non-substantive reference on a single page. Ignoring the vast evidence for psi abilities, near-death experiences and other paranormal phenomena (supremely relevant to the philosophy of mind) is one of the greatest errors of academic philosophers of the past fifty years.

Imagine a baseball manager who has a “philosophy of winning baseball games” that is simply “make contact with the ball.” If you had such a philosophy, you would be paying attention to only a very small fraction of what you need to be paying attention to in order to win baseball games. And any philosopher hoping to advance a credible philosophy of mind has to pay attention to problems vastly more varied than a mere “problem of consciousness” or problem of why some beings are aware.

Goff's philosophical approach is to try to sell the old idea of panpsychism. Around for a very long time, panpsychism is the idea that consciousness is in everything, or that consciousness is an intrinsic property of matter. A panpsychist may argue that just as mass is an intrinsic property of matter, consciousness is an intrinsic property of matter.

As shown by psychology textbooks that may run to 500 pages, the human mind (including memory) is an incredibly diverse and complicated thing, consisting of a huge number of capabilities and aspects. It has always been quite an error when people try to describe so complicated a thing as something simple and one-dimensional.  This is what panpsychists have always done when they try to reduce the mind to the word "consciousness," which they then describe as a "property." A property is a simple aspect of something that can be described by a single number (for example, weight is a property of matter, and length is a property of matter, both of which can be stated as a single number).  A mind is something vastly more complicated than a property.  

Goff commits this same simplistic error by trying to shrink the human mind to the word "consciousness" throughout his book, and then telling us on page 23 that consciousness is a "feature of the physical world," and telling us on page 113 that "consciousness is a fundamental and ubiquitous feature of physical reality." When I look up "feature," I find that it is defined to mean the same thing as "property": "a distinctive attribute or aspect of something."  Human minds are vastly more complicated than any mere "feature" or "property" or "aspect" or "attribute."  We are being fed simplistic pablum when we are told that our minds are some "feature" or "aspect" or "property." If you've started out with the vast diversity and extremely multifaceted richness of the human mind, and somehow ended up with a one-dimensional word such as "feature" or "aspect" or "property," you've gone seriously wrong somewhere. Call it a shrinkage snafu.

So many professors act like masters of concealment, acting in so many ways to misrepresent the gigantic mental and biological complexity of human beings, as if they were intent on covering up our complexities.  And so we always have utterly misleading cell diagrams included in our biology textbooks, which make it look like there are only a few organelles per cell (the paper here tells us that there are typically hundreds or thousands of organelles per cell). And so we have "cell types" diagrams, which make it look as if there are only a few types of cells (the human body actually has hundreds of types of cells). And so we have the myth that DNA is a blueprint or a recipe for making humans, false not only because of the lack of any such human specification in DNA, but also because of the naive error of speaking as if you could ever build an ever-changing, supremely dynamic organism like a human (as internally dynamic as a very busy factory) through some mere recipe or mere blueprint like you would use to construct a static house or a static piece of food.  And so we have the complexity-concealing claim that the vastly organized systemic arrangements of the human body can be explained by the "stuff piles up" idea of the accumulation of mutations (as if something as complex as a city could be explained by something like what we use to explain snow drifts). And so we have the frequent reality-denying assertions that mentally humans are "just another primate" or that other mammals are "just like us." And so we have the great complexity concealment of speaking as if a human mind were mere awareness or consciousness that could be described as a "property" or "feature."

Panpsychism creates the problem that we have to then end up believing that all kinds of inanimate things are conscious to some degree. If consciousness were to be some intrinsic property of matter, it would seem to follow that the more matter, the greater the consciousness. So we would have to believe that the large rocks in Central Park of New York City are far more conscious than we are. And we would also have to believe that the Moon is vastly more conscious than we are. But if such inanimate things are far more conscious than we are, why do they not give us the slightest indication that they are conscious? There is no sign of any intelligent motion in the comets or asteroids that travel through space. Instead they seem to operate according to purely physical principles, just exactly as if they had no consciousness whatsoever. That's why astronomers can predict very exactly how closely an asteroid will pass by our planet, and the exact day that it will pass by our planet. So it seems that Goff's claim on page 116 that panpsychism is “entirely consistent with the facts of empirical science” is not actually true. To the contrary, we see zero signs of any consciousness or will in any non-biological thing, no matter how great its size, contrary to what we would expect under the theory of panpsychism.

No sign of any Mind here (credit:NASA)

On page 113 Goff suggests that maybe it is just certain arrangements of matter that might be conscious.  Goff isn't being terribly clear when he tells us on page 113, "Most panpsychists will deny that your socks are conscious, while asserting that they are ultimately composed of things that are conscious." So what does that mean, that the threads of your socks are conscious? If a panpsychist tries to defend his beliefs by denying that all material things are conscious, this actually pulls the legs out from under the table of panpsychism, depriving it of any small explanatory value it might have.  Once you go from "all matter is conscious" to "only certain arrangements of matter are conscious," you still have the same problem faced by materialism: no one can see any reason why consciousness would appear from some particular arrangement of matter.

It would seem that the panpsychist has a kind of dilemma: either maintain that consciousness is an intrinsic property of matter (leaving you perhaps with some very small explanatory power, but many absurd consequences, such as large rocks being more conscious than humans), or maintain that only special arrangements of matter are conscious (which would seem to remove any explanatory reason for believing in panpsychism in the first place).

On pages 150 to 153 Goff shows himself to be an uncritical consumer of one of the biggest legends of neuroscience: that split-brain patients have a dual consciousness. They have no such thing, as we can discover by watching YouTube interviews with split-brain patients who clearly have a single self. A scientific study published in 2017 set the record straight on split-brain patients. The research was done at the University of Amsterdam by Yair Pinto. A press release entitled “Split Brain Does Not Lead to Split Consciousness” stated, “The researchers behind the study, led by UvA psychologist Yair Pinto, have found strong evidence showing that despite being characterised by little to no communication between the right and left brain hemispheres, split brain does not cause two independent conscious perceivers in one brain.” The actual facts about split-brain surgery are related here by a surgeon who has performed such an operation. He states this about split-brain patients:

"After the surgery they are unaffected in everyday life, except for the diminished seizures. They are one person after the surgery, as they were before."

Panpsychism does very little to help with the explanatory problems in the philosophy of mind. The main reason is that it does not help with more than one of the ten problems listed at the beginning of this post. For example, panpsychism is worthless in explaining how humans are able to instantly retrieve memories, or why humans are able to form abstract ideas.

In the last paragraph of the book, Goff makes a pitch that kind of follows that classic salesman's advice to “sell the sizzle not the steak.” He states the following (imagine some violins playing as you read this passage):

“Panpsychism offers a way of 're-enchanting the universe.' On the panpsychist view, the universe is like us; we belong in it. We need not live exclusively in the human realm, ever more diluted by globalization and consumerist capitalism. We can live in nature, in the universe. We can let go of nation and tribe, happy in the knowledge that there is a universe that welcomes us.”

But I fail to see any reason why a belief in panpsychism would produce any good change in human behavior. I can also imagine it having a bad effect. If you believe that all matter is conscious, you might have no particular guilt about killing someone. You might think to yourself: “He will still be conscious, even if I kill him, because all matter is conscious.” Similarly, if you believe that all matter is conscious, you might think it would be no great tragedy if all humanity were to become extinct, on the grounds that this would produce only a slight reduction in the total consciousness that exists in the universe (humanity having less than .0000000000000000000000000000000000001 of the universe's matter).

When panpsychists use simplistic shrinkage to describe mind as a mere "property" or "feature," it is like someone telling you that New York City is just a geographical coordinate, or like someone telling you that Brazil is just a pair of sounds someone can make with his mouth. 

Scientific American has an interview with Goff about his book. Goff states the following:

"The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain’s most basic parts."

We can try to imagine such a whimsical possibility. A quark might have an experience of a dull, static existence stuck inside an atomic nucleus. An electron might have an experience of constantly whizzing around a nucleus at incredible speeds, like some person stuck on an amusement park ride. Or a neuron might have an experience of just sitting there motionless inside a brain.  If there were billions or trillions or quadrillions of such tiny micro-experiences, they would never add up to anything like the experience of being a mobile thinking human free to walk around anywhere he wishes. 

Saturday, December 7, 2019

The Guy with the Smallest Brain Had the Highest IQ

According to the theory that your brain creates your mind and stores your memories, we should expect that removal of half of the brain would have a drastic effect on memory and intelligence. But at the link here and the link here you can read about many cases showing a good preservation of memory and intelligence even after half a brain was removed to treat epileptic seizures.

There is a new study relating to the topic of intelligence and removal of half of the brain.  Once again, the study reports facts shockingly inconsistent with standard claims that the brain is the source of the human mind. But the press reporting on this study is feeding us a kind of "cover story" trying to explain away the shocking result.  Upon close inspection, this "cover story" falls apart. 

The study involved brain scans of six patients who had half of their brains removed.  Table S3 of the supplemental information of the study reveals that the intelligence quotients (IQ scores) of the six subjects were 84, 95, 91, 99,  96 and 80. So most of the six were fairly smart, even though half of their brains were gone.  How could this be when half of their brains were missing? 

In stories such as the story in Discover magazine, it is suggested that "brain rewiring" can explain such a thing. The story states the following:

"In a study published Tuesday in Cell Reports, scientists studied six of these patients to see how the human brain rewires itself to adapt after major surgery. After performing brain scans on the patients, the researchers found that the remaining hemisphere formed even stronger connections between different brain networks — regions that control things like walking, talking and memory —  than in healthy control subjects. And the researchers suggest that these connections enable the brain, essentially, to function as if it were still whole."

The summary above is not accurate, as it tells a story that is not true for one of the six patients, as I will explain below. This hard-to-swallow story (repeated by the New York Times) is reassuring if you wish to keep believing that the brain is the source of your mind.  The person who buys such a story can reassure himself kind of like this:

"How do people stay smart when you take out half of their brain? It's simple: the brain just rewires itself so that the half works as good as a whole. It acts kind of like a computer that reprograms itself to keep functioning like normal when you yank out half of its components."

We know of no machines ever built that have such a capability.  All brains engage in some "brain rewiring" every year, so any mental effect can always be attributed to "brain rewiring." We cannot dream of how a brain could possibly be clever enough to rewire itself to perform just as well when half of its matter was removed.   When we take a close look at the data in the study, it shows that this "brain rewiring" story does not hold up for the smartest subject in the study. 

In Table S4 of the study, we have measurements based on brain scanning, designed to show the level of connectivity in the brains of the six subjects.  Some of the six subjects have a slightly higher average connectivity score, but it's not very much higher.  The average connectivity scores for the controls with normal brains were .30 and .35.  The average connectivity scores for the six patients with half a brain were .43, .45, .35, .30, .43, and .41.  So it was merely true that the average brain connectivity score of the patients with half a brain was slightly higher than that of the normal controls.  And when we look at another metric (the "max" score listed at the end of Table S4), we see that all of the half-brain subjects had lower "brain connectivity" scores than the controls.  The "max" connectivity scores for the controls with normal brains were .90 and .74, but the "max" connectivity scores for the six patients with half a brain were only .57, .67, .49, .51, .63, and .62.  So the evidence for greater brain connectivity or "nicely rewired brains" after removal of half a brain is actually quite thin.
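The comparison can be made precise with a few lines of arithmetic; a minimal sketch, using the Table S4 scores quoted in this post (the variable names are my own):

```python
# Connectivity scores from Table S4, as quoted above.
control_avg = [0.30, 0.35]
patient_avg = [0.43, 0.45, 0.35, 0.30, 0.43, 0.41]

control_max = [0.90, 0.74]
patient_max = [0.57, 0.67, 0.49, 0.51, 0.63, 0.62]

def mean(xs):
    return sum(xs) / len(xs)

# Average connectivity: patients are only modestly higher
# (roughly 0.40 versus roughly 0.33 for the controls)...
print(round(mean(patient_avg), 3), round(mean(control_avg), 3))

# ...while on the "max" metric, every patient scores below both controls.
print(max(patient_max) < min(control_max))  # True
```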

Interestingly, the half-brain patient with the highest intelligence (labeled as HS4, with an IQ of 99) had an average brain connectivity score of only .30, which is the same as one of the two control groups with normal brains, and less than the brain connectivity of the other control group.  So the smartest person with half a brain (who had an IQ of 99) did not have any greater brain connectivity that could explain his normal intelligence with only half a brain.  How can this subject HS4 have had normal intelligence with only half a brain?  In this case, favorable brain rewiring or greater brain connectivity cannot explain the result.  So the "cover story" of "their brains rewired to keep them smart" falls apart.

The half brain of subject HS4, IQ of 99, average brain wiring

The only way we can explain such results is by postulating that the human brain is not actually the source of the human mind.  If the human brain is neither the source of the human mind nor the storage place of memories, we should not find any of the results mentioned in this post to be surprising. 

Subject HS4 is not by any means the most remarkable case of a patient with half a brain and a good mind. The study here is entitled "Development of above normal language and intelligence 21 years after left hemispherectomy."  After they removed the part of the brain claimed to be the "center of language," a subject developed "above normal" language and intelligence. 

Then there is the case of Alex who did not start speaking until the left half of his brain was removed. A scientific paper describing the case says that Alex “failed to develop speech throughout early boyhood.” He could apparently say only one word (“mumma”) before his operation to cure epilepsy seizures. But then following a hemispherectomy (also called a hemidecortication) in which half of his brain was removed at age 8.5, “and withdrawal of anticonvulsants when he was more than 9 years old, Alex suddenly began to acquire speech.” We are told, “His most recent scores on tests of receptive and expressive language place him at an age equivalent of 8–10 years,” and that by age 10 he could “converse with copious and appropriate speech, involving some fairly long words.” Astonishingly, the boy who could not speak with a full brain could speak well after half of his brain was removed. The half of the brain removed was the left half – the very half that scientists tell us is the half that has more to do with language than the right half. 

What is also interesting in the new study is that when we cross-compare Figure 1 with Table S3 (in the supplemental information), we find that the patient with the largest brain (after the hemispherectomy operation) had the lowest IQ, and that the patient with the smallest brain had the highest IQ.  In Figure 1 the brain of the subject with an IQ of 80 (subject HS6) looks much larger than the brain of the subject with an IQ of 99 (subject HS4).  Such a result is not surprising under the hypothesis that your brain is not the source of your mind.  It should also not be surprising to anyone who considers the fact that the brain of the Neanderthals (presumably not as smart as we modern humans) was substantially larger than the brain of modern humans.