Thursday, February 20, 2025

Brain Tumors Seem to Have Relatively Little Effect on Cognitive Function

One way to test the "brains make minds" hypothesis is to examine the effect of brain tumors on cognitive performance. Under the hypothesis that the brain makes the mind and the brain stores memories, we should expect brain tumors to have a huge effect on cognitive performance. That does not seem to be the case at all. 

The most common test of cognitive performance used by doctors is the MMSE, which stands for Mini Mental State Examination. The link here gives you some of the questions used on the test. One example question asks you to count backwards from 100, going back 7 at each step (for example, 93, 86, 79, 72, 65). The MMSE has a maximum score of 30. Adults with normal cognitive function will tend to score about 29 on the test. The link here says that a score of 24 or higher is considered "normal."
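As a trivial illustration of the serial-sevens task, here is a little Python sketch of my own (not anything from the MMSE materials) that generates the expected answers:

    # Serial sevens: count backward from 100 by 7 for five steps
    value = 100
    answers = []
    for _ in range(5):
        value -= 7
        answers.append(value)
    print(answers)  # prints [93, 86, 79, 72, 65]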

Another widely used test of mental function is the Raven's Colored Progressive Matrices test, called the RCPM. The test has a maximum score of 36. According to the paper here, an average score for an elderly person is about 26. 

The study here ("THE EFFICACY OF RAVEN’S COLORED PROGRESSIVE MATRICES FOR PATIENTS WITH BRAIN TUMOR") gives MMSE and RCPM scores for 43 patients, before and after surgery for a brain tumor.  We read about some remarkable results: "Median pre- and post-operative MMSE scores were 29 points (14– 30) and 29 points (21–30), respectively. Median pre- and post-operative RCPM scores were 33 points (25–36) and 35 points (18–36), respectively."

Let us consider how much this result contradicts the "brains make minds" dogma. The results for the Mini Mental State Examination (MMSE) test were almost perfect for the 43 subjects with brain tumors. They scored a median of 29, only one point less than a perfect score of 30.  For the Raven's Colored Progressive Matrices test, the median score of the brain tumor patients after brain surgery was an almost perfect score of 35, only one point less than the maximum of 36. Moreover, after the brain surgery the score for these patients improved from 33 to 35.  Nothing is done in a brain tumor surgery to cognitively fix a brain. The sole purpose of the surgery is to remove the cancerous tumor, hopefully in some way that will prevent the tumor from reappearing. Very often the amount of brain tissue removed is greater than the amount that looks grossly cancerous. Under the hypothesis that the brain makes the mind, we should not at all expect patients to be getting better scores on mental tests after they had surgery to remove a brain tumor. 

Another interesting study is the study "Cognitive reserve and individual differences in brain tumour patients." The study involved about 700 brain tumor patients who were cognitively tested. The patients included 143 low-grade glioma patients and 181 high-grade glioma patients. High-grade glioma patients are those with the most aggressive type of brain tumor. The study has the limitation that it fails to give us the average or median cognitive test scores that it analyzed. All that we are given is some analysis expressed using correlation coefficients. A correlation coefficient is a number between −1 and +1 telling us how strongly one thing varies with another: a coefficient of 0 indicates no linear association, and a coefficient of +1 or −1 indicates a perfect association (and correlation by itself does not establish causation). 
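For readers who have not worked with correlation coefficients, here is a minimal Python sketch of my own showing how a Pearson correlation is computed; the numbers are made up for illustration and are not data from the study:

    import math

    def pearson_r(xs, ys):
        # r = covariance(x, y) / (std(x) * std(y)); the result always lies between -1 and +1
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical tumor volumes and test scores; a result near 0 means
    # essentially no linear relationship between the two measures.
    volumes = [10, 25, 40, 55, 70]
    scores = [34, 30, 35, 31, 33]
    print(round(pearson_r(volumes, scores), 3))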

Table 4 of the study indicates there was virtually no correlation between the volume of the tumor and performance on the Raven's Colored Progressive Matrices test, a negligible correlation of only −0.0345. The same table indicates there was virtually no correlation between performance on the Raven's Colored Progressive Matrices test and whether the tumor was a high-grade glioma, a negligible correlation of −0.0310. We do see a much higher correlation of 0.349 for "fronto-parietal" tumors, but Table 1 says there were only six patients with "fronto-parietal" tumors, which is too small a sample of "fronto-parietal tumor" patients to count as very significant evidence. 

Table 3 of the paper here ("Pre-Surgery Cognitive Performance and Voxel-Based Lesion-Symptom Mapping in Patients with Left High-Grade Glioma") gives the results of cognitive tests on 85 people with high-grade glioma in the left hemisphere. Under the dogma that the brain makes the mind, we would expect most of these people with severe brain tumors to have performed poorly on such tests. But the table does not show that. Instead we see that on 17 out of 18 tests most of the patients did not perform in a "pathological" manner. Only on a "verb naming" task did a majority of the subjects (61%) perform poorly. On the other 17 of the 18 tests, an average of only about 25% of the subjects performed poorly. 

The paper here ("Quality of life in patients with stable disease after surgery, radiotherapy, and chemotherapy for malignant brain tumour") analyzed cognitive data on 57 patients with malignant brain tumors, a particularly severe type. We read this: "Separate Mann Whitney tests did not show any differences between the tumour and control groups in terms of score for FLIC (U=476.5, p=0.031), ADL (U=674, p=0.89), STAI1 (U=502, p=0.059), STAI2 (U=641, p=0.65), SRDS (U=618, p=0.49), Raven’s coloured progressive matrices (U=533, p=0.11), attentional matrices (U=624, p=0.53), trail making test part A (U=673.5, p=0.91) and B (U=624, p=0.53), or story recall scores (U=637, p=0.62)." The average score on the Raven’s Colored Progressive Matrices test for the brain tumor patients was about 28 (27.86). They scored higher on this test than the control subjects, who averaged only 26.0. According to the paper here, an average score for an elderly person is about 26. So the people with malignant brain tumors scored higher on the cognitive test than normal people of their age. 
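For readers unfamiliar with the Mann Whitney test, here is a minimal Python sketch of my own (with invented numbers, not the study's data) showing the kind of comparison that produces the U and p values quoted above; by convention a p value above 0.05 is read as showing no significant difference between the groups:

    from scipy.stats import mannwhitneyu

    # Hypothetical RCPM scores for a tumor group and an age-matched control group
    tumor_scores = [28, 30, 25, 31, 27, 29, 26, 30]
    control_scores = [26, 27, 24, 28, 25, 27, 26, 29]

    result = mannwhitneyu(tumor_scores, control_scores, alternative='two-sided')
    print(result.statistic, result.pvalue)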

The paper here ("Evaluation of mini-mental status examination score after gamma knife radiosurgery as the first radiation treatment for brain metastases") gave the MMSE cognitive test to 119 patients before and after gamma knife radiosurgery for brain metastases. We read, "In 16 of 37 patients (43.2 %) with pre-GKS MMSE scores ≤27, the MMSE scores improved by ≥3 points, whereas 15 of all patients (19.7 %) experienced deteriorations of ≥3 points." It sounds like the number of patients whose cognitive scores improved was about as high as the number whose scores declined.

The study here ("Episodic Memory Impairments in Primary Brain Tumor Patients") studied problems with memory in 158 people with brain tumors. Using a method that sounds as if it was trying to report as many people as possible as having memory problems, the study claims that only 42% of those with brain tumors had any memory problems. It reports that "No correlations between specific tumor locations and types of episodic memory impairment were found, except for the association of encoding deficits with corpus callosum infiltration (Logistic regression: OR 4.36, β = 1.68, 95% IC 1.37–12.58, p = .02)." Since people with brain tumors are typically elderly, and maybe something like 40% of elderly people report some type of memory problem, we have no clear evidence that the brain tumors are causing the memory problems. This study follows a frustrating methodology in which it refuses to report the degree of dysfunction in any of the people reported as having memory problems. We have a claim about what percentage of brain tumor patients have some kind of memory problem, without any details on how bad such problems were. This is just what we would expect if only tiny performance differences were found. 

We get a "memory problem criteria" description that sounds like it is trying to place as many people as possible in a category of "people with memory problems":

"Each of the nine scores recorded (number of word recalled at immediate recall), free recalls (1, 2, 3, delayed) and cued recalls (1, 2, 3, delayed) were considered abnormal when it corresponded to a performance equal to or under the fifth percentile of the healthy controls normative data (van der Linden et al., 2004). An encoding deficit was diagnosed when the immediate recall was abnormal (the assumption being that the items were not present in the working memory immediately after they have been red, and so not encoded). A failure in free recalls corresponded to at least 2/4 abnormal scores and a failure in cued recalls corresponded to at least 2/4 abnormal scores (the test is composed of four free and four cued recalls). A storage deficit was diagnosed in the case of a failure in free recalls associated with a failure in cued recalls. This means that the cues didn’t improve the number of items recalled, assuming that the items was not stored. A retrieval deficit was diagnosed in the case of a failure in free recalls isolated (normal cued recalls). Indeed, the cues improved the number of items recalled comparably to healthy controls, giving a proof that items was stored in the memory but not available at the moment. Furthermore, an association of storage and retrieval deficit was diagnosed in the case of a failure in free recalls and a failure in cued recalls, but with limited improvement (incomparably to healthy controls) of the total number of items recalled with cue."

Despite this method, which sounds as if it was designed to make as many as possible be categorized as people with a memory problem, only 42% of those with brain tumors were classified as having a memory problem. We are left here with no good evidence of brain tumors causing substantial memory problems.  The finding that "no correlations between specific tumor locations and types of episodic memory impairment were found" (with only one minor exception) is consistent with the idea that memories are not actually stored in brains.

Another study of 121 patients with severe brain tumors (Stage III and Stage IV) gave four tests of working memory and two tests of episodic memory, finding that only 10%, 17%, 22%, 23%, 28% and 18% had a "clinically relevant deficit." Referring to radiation therapy to treat brain cancer, the paper "Effects of Radiotherapy on Cognitive Function in Patients With Low-Grade Glioma Measured by the Folstein Mini-Mental State Examination" says that "Only a small percentage of patients had cognitive deterioration after radiotherapy."

The study "Efficacy and Cognitive Outcomes of Gamma Knife Radiosurgery in Glioblastoma Management for Elderly Patients" studied 49 patients with the most severe type of brain tumor, a glioblastoma. Table 1 tells us the patients had a median tumor size of 5.4 centimeters (about two inches). According to Table 2, a year after radiation therapy, the average MMSE score of the patients was about 25, slightly below average, but still pretty good. Before the surgery, when the patients had lost a lot of their brain tissue due to tumors, the average MMSE score was a fairly good 27 (30 being the highest score possible). 

The study "Detrimental Effects of Tumor Progression on Cognitive Function of Patients With High-Grade Glioma" is one of those studies that makes it hard to extract the most relevant data from it. The most relevant fact that I can extract from it is found in Table 1, where I see that the number of patients with very bad brain tumors (high-grade glioma) and normal cognitive scores (as tested by the MMSE) was 757, and the number with abnormal MMSE scores was only 389. 659 of these 757 patients with normal cognitive scores had Grade 4 brain tumors, the worst type. 

The paper "Prospective memory impairment following whole brain radiotherapy in patients with metastatic brain cancer" gives us the MMSE cognitive test scores for 81 patients before and after they had treatment for metastatic brain cancer. The average score before the treatment was 27, and the average score after the treatment was 26 (Table 2). So there was no big difference. The link here says that a score of 24 or higher on the MMSE is considered "normal." According to Table 1, 23 of the patients had a brain tumor larger than 3 centimeters (about 1.2 inches). 

Below is a quote from the paper "Meningiomas and Cognitive Impairment after Treatment: A Systematic and Narrative Review."  The paper summarizes other papers studying the cognitive effects of a common type of brain surgery. The quote below refers to studies that compared cognitive function before and after brain surgery. We see some studies discussing a negative cognitive effect, but relatively few. The reported fractions are not very high.  There is also reference to a number of studies showing improvements in cognitive abilities after brain surgery. 

"Worsening of verbal, working and visual memory (9/22 studies) 
Worsening of complex attention and orientation (1/22 studies) 
Worsening of executive functioning (3/22 studies) 
Worsening of language and verbal fluency (2/22 studies) 
Worsening of cognitive flexibility (4/22 studies) 
Worsening in all neurocognitive domains (1/22 studies) 
Improvement in verbal, working and visual memory (3/22 studies) 
Improvement of complex attention and orientation (3/22 studies) 
Improvement of executive functioning (2/22 studies) 
Improvement of cognitive flexibility (1/22 studies)."

The same paper summarizes studies comparing those who had brain surgery with normal control subjects. We seem to have only scanty evidence of worse performance after brain surgery, because the reported fractions are low:

"Worse verbal, working and visual memory (2/22 studies)
Worse complex attention and orientation (1/22 studies)
Worse executive functioning (1/22 studies)
Worse language and verbal fluency (2/22 studies)
Worse cognitive flexibility (4/22 studies)."

The fractions quoted are small fractions such as 5% or 10%, and we don't know how much of a decline the "worse" refers to. When you also take into account that neuroscientists will typically be biased towards reporting declines in function after brain surgery rather than improvements or no differences (in accordance with their dogma that brains produce minds), it is not clear that we have here any strong evidence of decline in cognitive function after this type of brain surgery. 

Below is a very notable case of the almost complete destruction of a brain by a brain tumor, but with a high preservation of mental function. It comes from page 71 of the document here (and the newspaper story here repeats the same details).

high cognition with little brain

Overall, these results are quite compatible with the idea that your brain does not make your mind, and the idea that your brain is not the storage place of your memories. 

I can recall a personal experience here. Years ago I traveled to see a beloved relative who was dying from the spread of metastatic breast cancer. When I met her I saw a very noticeable tumor protruding from her head. I seem to recall the skull protrusion being about the size of a fist, or nearly as large. I assume that a very large part of the brain had been destroyed by the cancer. But when I talked to her, I could notice no change in her cognition. I asked her an important question about events in the past, and she gave a meaningful, detailed answer with relevant examples provided. I left, and a few weeks later she died. The lack of disturbance in cognition, speech and recollection in someone with a very visually noticeable brain tumor was striking. 

Thursday, February 13, 2025

The Reason You Will Never Be Able to Upload Your Mind Is the Same Reason You Won't Ever Need To

A very accomplished technologist and inventor, Ray Kurzweil has become famous for his prediction that there will before long be a “Singularity” in which machines become super-intelligent (a prediction made in his 2005 book The Singularity Is Near). In his 1999 book The Age of Spiritual Machines, Kurzweil made some very specific predictions about specific years: the year 2009, the year 2019, the year 2029, and the year 2099. Let's look at how well his predictions for the year 2019 hold up to reality. 

Prediction #1: “The computational ability of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second)."

Reality: A $4,000 computing device in 1999 dollars is equivalent to about a $7,700 computing device today. There is no $7,700 computing device that can compute even a hundredth as fast as 20 million billion calculations per second. The fastest current processor for a machine under $8,000 is the Intel Core i9-14900KS, with a clock speed of about 6 gigahertz (on the order of 6 billion clock cycles per second per core). If you shell out about $8,000 for a high-end gaming computer, you can get a few teraflops of floating-point calculations per second. Even if we use that figure rather than the clock speed, the computing capability is still more than 1,000 times smaller than the computing capability predicted by Kurzweil for such a device in 2019. 
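To make the arithmetic behind that comparison explicit, here is a back-of-the-envelope Python sketch; the 4-teraflop figure is simply an assumed stand-in for the "few teraflops" mentioned above:

    # Kurzweil's predicted capability for such a device in 2019
    predicted_ops_per_sec = 20e15   # 20 million billion calculations per second

    # Assumed figure for a high-end consumer machine (stand-in for "a few teraflops")
    assumed_actual_flops = 4e12

    shortfall = predicted_ops_per_sec / assumed_actual_flops
    print(f"The prediction exceeds the assumed actual figure by a factor of {shortfall:,.0f}")
    # prints a factor of 5,000, consistent with "more than 1000 times smaller"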

Prediction #2: “Computers are now largely invisible and are embedded everywhere – in walls, tables, chairs, desks, clothing, jewelry, and bodies.” 

Reality: Nothing like this has happened, and while computers are smaller and thinner, they are not at all "largely invisible."

Prediction #3: “Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory 'lenses,' are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.” 

Reality: Such things are not at all used “routinely,” and I am aware of no cases in which they are used at all. There is very little communication through virtual reality displays, and when it is done it involves bulky apparatus like the Oculus Rift device, which resembles a scuba mask.

Prediction #4: “People routinely use three-dimensional displays built into their glasses, or contact lenses. These 'direct eye' displays create highly realistic, virtual visual environments overlaying the 'real' environment.”

Reality: Very few people use any such technology, even in the year 2025. 

Prediction #5: “High-resolution, three dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity." 

Reality: This sounds like a prediction of some reality similar to the Holodeck first depicted in the TV series Star Trek: The Next Generation, or a prediction that realistic virtual sex will be available by 2019. Alas, we have no such things.

Prediction #6: “Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.”

Reality: Paper books and documents are used much less commonly than in 1999, but it is not at all true that most learning  occurs through “intelligent, simulated software-based teachers.” 

Prediction #7: “The vast majority of transactions include a simulated person.”

Reality: A large percentage of transactions are electronic, but very few of them involve a simulated person.

Prediction #8: “Automated driving systems are now installed on most roads.”

Reality: Although there are a few self-driving cars on the road, 99% of traffic is old-fashioned traffic with human drivers.

Prediction #9: "Most flying weapons are small -- some as small as insects -- with microscopic flying weapons being researched."

Reality: The public has not yet even heard of tiny flying weapons.

Prediction #10: "The expected lifespan...has now substantially increased again, to over one hundred." 

Reality: A November 2023 article is entitled "Life expectancy for men in U.S. falls to 73 years — six years less than for women, per study."

Prediction #11: "Keyboards are rare."

Reality: No, keyboards were not rare either in 2019 or today. 

Prediction #12: "The majority of 'computes' of computers are now devoted to massively parallel neural nets and genetic algorithms."

Reality: Not true. So-called genetic algorithms are pretty useless as a computing methodology. Software is not significantly developed through any Darwinian means, because random mutation with survival of the luckier results is not a workable method for creating very complex systems or very complex functional innovations, contrary to the claims of Darwinist biologists. Darwinism has flunked the software test.  

So Kurzweil's predictions for 2019 were very far off the mark. But Kurzweil is still playing the role of Grand Techno-Prophet. In a Freethink.com article last year, Kurzweil predicted that in the 2030's nanobots (microscopic robots injected into the body) will produce a great increase in lifespan. In the same article he predicted that the uploading of human minds into computers will occur by the 2040's.  Are there any reasons to think that his predictions for the 2030's and 2040's are unlikely to be correct? There certainly are. 

One reason is that Kurzweil never did much to prove his claim that there is a Law of Accelerating Returns causing the time interval between major events to grow shorter and shorter. On page 27 of The Age of Spiritual Machines he tries to derive this law from evolution, claiming that natural evolution follows such a law.  But we don't see such a law being observed in the history of life.  Not counting the appearance of humans, by far the biggest leap in biological order occurred not fairly recently, but about 540 million years ago, when almost all of the existing animal phyla appeared rather suddenly during the Cambrian Explosion.  No animal phylum has appeared in the past 480 million years. So we do not at all see such a Law of Accelerating Returns in the history of life.  There has, in fact, been no major leap in biological innovation during the past 30,000 years. 

Kurzweil's logic on page 27 contains an obvious flaw. He states this:

"The advance of technology is inherently an evolutionary process.  Indeed, it is a continuation of the same evolutionary process that gave rise to the technology-creating species. Therefore, in accordance with the Law of Accelerating Returns, the time interval between salient advances grows exponentially shorter as time passes." 

This is completely fallacious reasoning, both because the natural history of life has not actually followed a Law of Accelerating Returns, and also because the advance of technology is not a process like the evolutionary process postulated by Darwin.  The evolutionary process imagined by Darwin is blind, unguided, and natural, but the growth of technology is purposeful, guided and artificial.

On the same page, Kurzweil cites Moore's Law as justification for the Law of Accelerating Returns. For a long time this rule of thumb held true: the number of transistors that could be packed onto a chip doubled roughly every two years. But in 2015 Moore himself said, "I see Moore's law dying here in the next decade or so." In the Wikipedia.org article on Moore's Law, we read, "Some forecasters, including Gordon Moore, predict that Moore's law will end by around 2025." It is now clear that fundamental limits of making things smaller will cause Moore's Law to stop being true before long. 

 Machines smarter than humans would require stratospheric leaps forward in computer software, but computer software has never grown at anything like an exponential pace or an accelerating pace.  Nothing like Moore's Law has ever existed in the world of software development.  Kurzweil has occasionally attempted to suggest that evolutionary algorithms will produce some great leap that will speed up the rate of software development. But a 2018 review of evolutionary algorithms concludes that they have been of little use, and states: "Our analysis of relevant literature shows that no one has succeeded at evolving non-trivial software from scratch, in other words the Darwinian algorithm works in theory, but does not work in practice, when applied in the domain of software production." 

Page 256 of the document here refers to problems with software that throw cold water on hopes that progress with computers will be exponential:

"Lanier discusses software 'brittleness,' 'legacy code,' 'lock-in,' and 'other perversions' that work counter to the logic of Kurzweil’s exponential vision. It turns out there is also an exponential growth curve in programming and IT support jobs, as more and more talent and hours are drawn into managing, debugging, translating incompatible databases, and protecting our exponentially better, cheaper, and more connected computers. This exponential countertrend suggests that humanity will become 'a planet of help desks' long before the Singularity."

We already have something like this with so-called artificial intelligence systems, which by now involve databases and code so complex that no one adequately understands them. We hear talk of AI "hallucinations" that sound unfixable because the AI systems are "black boxes" that humans cannot dive into and debug as they would code written by humans. 

As for Kurzweil's predictions about nanobots producing a surge in lifespan during the 2030's, there are strong reasons for doubting it. Futurists have long advanced the idea that tiny nanobots might be injected into people, to circulate through the human body and repair its cells. But there may be technical reasons why such things are unfeasible. Nobel Prize winner Richard Smalley argued that the molecular assemblers imagined by nanotechnology enthusiast Eric Drexler were not feasible, for various scientific reasons such as what he called the “fat fingers” problem.

There is another strong reason for rejecting the idea that nanobots will be able to produce some great increase in human lifespan. The reason is the stratospheric complexity of human biology and the vast levels of organization in human bodies. Transhumanists have generally failed to understand the stratospheric amount of organization and functional complexity in living things. An objective and very thorough scholar of biological complexity will be very skeptical of any claim that devices humans manufacture will be able to make humans live very much longer. There is so very much about the fundamentals of human life that biologists still don't understand. The visual below illustrates the situation. The problems listed are problems a hundred miles over the heads of scientists. 

problems scientists don't understand

While scientists can list stages in cell reproduction, scientists are unable to even explain the marvel of cell reproduction, which for many cells is a feat as impressive as an automobile splitting up to become two separate working automobiles. Scientists cannot even explain how protein molecules are able to fold into the three-dimensional shapes needed for their functions, shapes not specified by DNA. Without resorting to lies such as the lie that DNA is a body blueprint, no scientist can credibly explain how a speck-sized zygote (existing just after female impregnation) is able to progress over nine months to become the vast hierarchical organization of the human body. No scientist has a decent physical explanation of how memories can form, or how they can persist for a lifetime. No scientist has a decent explanation of how humans are able to instantly recall lots of detailed relevant information as soon as they hear a name mentioned, or see a photo of someone, a feat that should be impossible using a brain that is completely lacking in sorting, indexes and addresses (the type of things that make instant recall possible).  

With human knowledge being so fragmentary, and so many basic problems of explaining humans and their minds being so far over the heads of scientists, how improbable is it that humans will be able to overhaul their biology on the microscopic level by using microscopic robots called nanobots, or any other technology we can envision in the next fifty years?

transhumanists

The biggest reason for doubting Kurzweil's predictions beyond 2019 is that they are based on assumptions about the brain and mind that are incorrect. Kurzweil is an uncritical consumer of neuroscientist dogmas about the brain and mind. He assumes that the mind must be a product of the brain, and that memories must be stored in the brain, because that is what neuroscientists typically claim. If he had made an adequate study of the topic, he would have found that the low-level facts collected by neuroscientists do not support the high-level claims that neuroscientists make about the brain, and frequently contradict such claims. To give a few examples:

  • There is no place in the brain suitable for storing memories that last for decades, and things like synapses and dendritic spines (alleged to be involved in memory storage) are unstable, "shifting sands" kind of things which do not last for years, and which consist of proteins that only last for weeks.
  • The synapses that transmit signals in the brain are very noisy and unreliable,  in contrast to humans who can recall very large amounts of memorized information without error.
  • Signal transmission in the brain must mainly be a snail's pace affair, because of very serious slowing factors such as synaptic delays and synaptic fatigue (wrongly ignored by those who write about the speed of brain signals), meaning brains are too slow to explain instantaneous human memory recall.
  • The brain seems to have no mechanism for reading memories.
  • The brain seems to have no mechanism for writing memories, nothing like the read-write heads found in computers.
  • The brain has nothing that might explain the instantaneous recall of long-ago-learned information that humans routinely display, and has nothing like the things that allow instant data retrieval in computers.
  • Brain tissue has been studied at the most minute resolution, and it shows no sign of storing any encoded information (such as memory information) other than the genetic information that is in almost every cell of the body.
  • There is no sign that the brain or the human genome has any of the vast genomic apparatus it would need to have to accomplish the gigantic task of converting learned conceptual knowledge and episodic memories into neural states or synapse states (the task would presumably require thousands of specialized proteins, and there's no real sign that such memory-encoding proteins exist).
  • No neuroscientist has ever given a detailed explanation of how such a gigantic translation task of memory encoding could be accomplished (one that included precise, detailed examples).
  • Contrary to the claim that brains store memories and produce our thinking, case histories show that humans can lose half or more of their brains (due to disease or hemispherectomy operations), and suffer little damage to memory or intelligence (as discussed here). 

Had he made a study of paranormal phenomena, something he shows no signs of having studied, Kurzweil might have come to the same idea suggested by the neuroscience facts above: that the brain cannot be an explanation for the human mind and human memory, and that these things must be aspects of some reality that is not neural, probably a spiritual dimension of humanity.

Since he believes that our minds are merely the products of our brains, Kurzweil thinks that we will be able to make machines as intelligent as we are, and eventually far more intelligent than we are, by somehow leveraging some "mind from matter" principle used by the human brain. But no one has any credible account of what such a principle could be, and certainly Kurzweil does not (although he tried to create an impression of knowledge about this topic with his book How to Create a Mind).   We already know the details of the structure and physiology of the brain, and what is going on in the brain in terms of matter and energy movement.  Such details do nothing to clarify any "mind from matter" principle that might explain how a brain could generate a mind, or be leveraged to make super-intelligent machines. 

In general, transhumanists tend to be poor scholars of four very important topics:

(1) Transhumanists tend to be poor scholars of biological complexity and the vast amount of organization and fine-tuned functionality in human bodies. So they make naive claims such as that injected microscopic robots will soon be able to fix your cells and double your lifespan, failing to realize how unlikely that is, given the vast complexity of the interdependent components of the human body, of human cells, and of human biochemistry. 
(2) Transhumanists tend to be poor scholars of genetics and human DNA. A proper study of the topic will help you realize that genes do not explain either the origin of the human body or the characteristics of the human mind. DNA merely has low-level chemical information, not high-level anatomical information, and not any information that can explain mind or memory. We have an example of very bad transhumanist misunderstanding of DNA on page 2 of Kurzweil's book "How to Create a Mind," where he incorrectly states, "A billion years later a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these 'programs.' " DNA has no specification of how to make an organism or any of its organs or any of its cells. There are no programs in DNA, but merely lists of chemical ingredients, such as which amino acids make up a particular protein. The idea that an organism is built by DNA is childish nonsense. DNA has no blueprint for constructing an organism, and even if it did, it would not explain how organisms get built, because blueprints don't build things. Things get built with the help of blueprints only when intelligent agents read blueprints to get ideas about how to build things. 
(3) Transhumanists tend to be poor scholars of human minds,  and the vast diversity of human experiences, normal and paranormal. A proper study of two hundred years of evidence for paranormal phenomena leads to the conclusion that humans are souls, not products of brains. 
(4) Transhumanists tend to be poor scholars of human brains and their physical shortfalls, which rule out brains as a credible explanation for human minds. Ask a transhumanist about things such as the average lifetime of brain proteins or the transmission reliability rate of chemical synapses, and you won't be likely to get the right answer.  

In an interview, Kurzweil made this claim:

"I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more." 

Not true at all. Countless writers predicted that we would have machines as smart as people by the year 2000, and in 1999 futurists were typically predicting that this would happen by about this year (2025). Page 12 of the paper here gives a graph showing that 8 experts predicted that computers would have human-level intelligence by about the year 2000, that 9 other experts predicted that computers would have human-level intelligence by the year 2025, and that 12 other experts predicted that computers would have human-level intelligence by the year 2030. In the same interview, Kurzweil makes the gigantically false claim that our understanding of biology is progressing exponentially. To the contrary, biologists are stuck from an explanatory standpoint, still light-years away from credibly explaining the most basic aspects of biology, such as consciousness, cell reproduction, morphogenesis, human selfhood, human understanding, instant human recall, the creation and formation of memories by humans, and the progression from a speck-sized zygote to a full human body. Biologists cannot credibly explain the origin of life or the origin of any type of protein molecule or any type of cell in the human body, such things being vastly too complex and vastly too organized to be credibly explained by Darwinian ideas. From an explanatory standpoint, neuroscientists are stuck in the mud and getting nowhere, as they have been for decades. You might think otherwise from all the press releases about low-quality experimental studies guilty of defects such as way-too-small study group sizes. 

On page 4 of his 2013 book "How to Create a Mind" Kurzweil stated most incorrectly, "We are rapidly reverse-engineering the information processes that underlie biology including that of our brains." Nothing of the sort was being done then, and nothing of the sort is being done now. Biologists lack any understanding of how a human can think or learn or recall by any brain mechanism, and when they try to sound like they have some knowledge of such things, we usually get nothing but vacuous hand-waving. There are some very successful types of computer programs with misleadingly biological-sounding names such as "neural nets," but such programs do not actually have characteristics matching those of the brain or any part of the brain. 

The main reason you will not ever be able to upload your mind into a computer is that such an idea is based on the assumption that your brain creates your mind and that your brain stores your memories: an assumption that is not correct. You are not your brain, but a soul. But you need not worry a bit about the impossibility of uploading minds into computers. The very reality that makes such a thing impossible is a reality that makes such a thing unnecessary. You do not need to upload your mind into a computer, because your mind is a soul reality that will not die when your brain and body die. One of the main types of evidence for this is out-of-body experiences, with quite a few features beyond any credible neuroscientific explanation (as discussed here). 

Saturday, February 8, 2025

Neuroscience News Articles Keep Passing Off Memory-Irrelevant Research as Findings About Memory

Neuroscientists are getting nowhere in trying to find any neural basis for human memory. Nothing could be less surprising, considering that the human brain bears not the slightest resemblance to a device for permanently storing and instantly retrieving memories. Because humans manufacture devices that are capable of permanently storing and instantly retrieving newly acquired information such as visual information, we know the kinds of things that such devices require. They include things such as these:

(1) Some component for writing transient visual or auditory signals into some information storage format capable of being permanently stored. 

(2) Some component capable of permanently storing such converted data once it has been captured. 

(3) Things such as indexes, addressing or sorting that would allow a particular piece of data to be found, given only a name. 

(4) Some component capable of reading the stored data once it was found. 

Nothing like this is found anywhere in the brain. No one has any credible theory of how something you see or hear could ever be converted into neural states or synapse states where it could be permanently stored. Nothing in the brain looks like any type of component for writing information. Nothing in the brain looks like any type of component for reading information. Nothing in the brain looks like a place where human memory information could be permanently stored. The proteins in the brain have short average lifetimes of less than two weeks. 
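To make concrete what those four components look like in an engineered system, here is a minimal Python sketch of my own, a toy record store offered purely for illustration of writing, storing, indexing and reading:

    class ToyMemoryStore:
        def __init__(self):
            self._records = []   # (2) a component that holds encoded records persistently
            self._index = {}     # (3) an index mapping a name to a storage location

        def write(self, name, raw_experience):
            # (1) a writing component: encode the transient input into a storable format
            encoded = str(raw_experience)
            self._records.append(encoded)
            self._index[name] = len(self._records) - 1

        def read(self, name):
            # (4) a reading component: use the index to find the record instantly
            position = self._index[name]
            return self._records[position]

    store = ToyMemoryStore()
    store.write("friend's phone number", "555-0134")
    print(store.read("friend's phone number"))

The point of the sketch is only to show how tightly writing, storage, indexing and reading depend on each other in any engineered retrieval system.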

So what do you do if you are someone trying to keep alive the unfounded notion that the human brain is the storage place of human memories? One of the things that you can do is deceive and misinform. Part of the deception that goes on is when neuroscience studies unrelated to memory are passed off as studies telling us something about a neural basis for memory. 

An example is a story promoted today by the science news page of a major news provider, where we find too many untrue "science news" headlines. The story is from the Neuroscience News web site, a frequent source of untrue headlines. The bogus headline is "Brain Cells Use Muscle-Like Signals to Strengthen Learning and Memory."

In this case you can tell the headline is misleading without even reading the paper being touted. You can simply look for the old trick in which a headline asserts something as matter-of-fact while the article text tells us it is really only a "maybe." 

Headline: "Brain Cells Use Muscle-Like Signals to Strengthen Learning and Memory."

Article Text: "New research led by the Lippincott-Schwartz Lab shows that a network of subcellular structures similar to those responsible for propagating molecular signals that make muscles contract are also responsible for transmitting signals in the brain that may facilitate learning and memory."

We then have the "sheds light" trick. It works like this: some type of low-level research having no clear significance is discussed, and the claim is made that this "sheds light" on blah blah blah, where blah blah blah is some dogma that neuroscientists like to teach, typically a groundless dogma having no good basis in fact, and often a groundless dogma ruled out by facts about the brain that have already been discovered. 

The trick is used when the article says, "It also sheds light on the molecular mechanisms underlying synaptic plasticity – the strengthening or weakening of neuronal connections that enables learning and memory."  There is no credible theory under which learning or memory can be explained by "the strengthening or weakening of neuronal connections." The phrase "synapse strengthening" is the vague hand-waving phrase neuroscientists use when asked how memories could be stored in brains.  The theory that memories are stored by synapse strengthening is untenable for the 30 reasons discussed here. Synapses are 50 times too unstable to explain memories that can reliably persist for 60 years or more. Synapses have been well-examined by electron microscopes, and no trace of any learned information can be found there, or anywhere else in the brain. 

We can see how "there's no there there" (at least in regard to memory) by looking at the scientific paper being promoted by the Neuroscience News article, the paper here. The paper has no substantive uses of either the word "learning" or the word "memory." The paper's only use of the words "learn" or "learning" is an unsupported claim that some area is "a key area for associative learning in the fly brain." The text of the paper makes no use of the word "memory" or "memories." 

So we have a paper that makes basically no claims about learning or memory being promoted by the Neuroscience News site as if it had discovered some big important thing about the brain handling memory or learning. It's a typical occurrence in articles classified as "neuroscience news," where it seems you are just as likely to find a big untruth as an important fact. 

anatomy of a neuroscience news story

Today on Facebook I got a link to a PR article from the Vanderbilt Brain Institute, something trying to persuade me that this institute is making some progress in doing neuroscience research on memory. The article has the goofy title "Learning at every level, from stumbling babies to deep-brain stimulation." Deep-brain stimulation is not learning. The article completely fails to back up the idea that this institute is making any progress in uncovering a neural explanation for learning. The most substantive-sounding claim it makes is the very incorrect claim, "His research has shown that intermittent electrical stimulation of the nucleus basalis of Meynert, whose neurones are a source of acetylcholine, enhances working memory in rhesus monkeys." The claim is based on a paper that used a laughably small sample size of only two monkeys, which is a very bad joke from any standpoint of reliable experimental research. 15 or 20 subjects per study group is the bare minimum for any research study of this type to be taken seriously. 

Thursday, February 6, 2025

Newspaper Accounts of Memory Marvels (Part 2)

The credibility of claims that memory recollections come from brains is inversely proportional to the speed, capacity, and reliability with which things can be memorized and recalled. There are numerous signal slowing factors in the brain, such as the relatively slow speed of dendrites, and the cumulative effect of synaptic delays in which signals have to travel over relatively slow chemical synapses (by far the most common type of synapse in the brain). As explained in my post here, such physical factors should cause brain signals to move at a typical speed very many times slower than the often cited figure of 100 meters per second: a sluggish "snail's pace" speed of only about a centimeter per second (less than half an inch per second). Ordinary everyday evidence of very fast thinking and instant recall is therefore evidence against claims that memory recall occurs because of brain activity, particularly because the brain is totally lacking in the things humans add to constructed objects to allow fast recall (things such as sorting and addressing and indexes). Chemical synapses in the brain do not even reliably transmit signals. Scientific papers say that each time a signal is transmitted across a chemical synapse, it is transmitted with a reliability of 50% or less. (A paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower.") The more evidence we have of very fast, very accurate, and very capacious recall (what a computer expert might call high-speed, high-throughput retrieval), the stronger is the evidence against the claim that memory recall occurs from brain activity. 
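To illustrate why low per-synapse reliability matters for any signal that has to cross many synapses in series, here is a small back-of-the-envelope Python sketch of my own; the per-synapse reliability figures are the ones quoted above, and the path lengths are merely assumed for illustration:

    # If each chemical synapse relays a signal with probability p, a signal that
    # must cross n synapses in series arrives intact with probability p ** n.
    for p in (0.5, 0.1):          # reliability figures quoted above
        for n in (5, 10, 20):     # assumed numbers of synapses in series
            print(f"p = {p}, {n} synapses in series: {p ** n:.6f}")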

It is therefore very important to collect and study all cases of exceptional human memory performance. The more such cases we find, and the more dramatic such cases are, the stronger is the case against the claim that memory is a neural phenomenon. Or to put it another way, the credibility of claims that memory is a brain phenomenon is inversely proportional to the speed and reliability and capacity of the best cases of human mental performance. The more cases that can be found of humans who seem to recall too quickly for a noisy, address-free brain to ever manage, or who seem to recall too well for a noisy, index-free, signal-mangling brain to ever manage, the stronger is the case that memory is not a neural phenomenon but instead a spiritual or psychic or metaphysical phenomenon. In part 1 of this post, I gave many newspaper clips giving examples of such exceptional human memory performance. Let us now look at some more of such newspaper clips. 

Below is part of an 1886 newspaper article that describes what seems like what is now called Highly Superior Autobiographical Memory (HSAM), also called hyperthymesia.

hyperthymesia

The account can be read below:


The source of the account below is the W. D. Henkle January 1871 article here, "Remarkable Cases of Memory," which documents the abilities of Daniel McCartney in very great detail, giving transcripts of interviews with him. I will have a post on this case in the next few months. 

On the same 1886 page shown above we can read the account below, which tells of a man with an extraordinary ability to remember architectural details. The case reminds you of the modern case of Stephen Wiltshire:

memory marvel

Later on the same page, we read of these memory marvels. The reference to "almost the whole of Horace, Virgil, Homer, Cicero and Livy" refers to a set of books totaling many thousands of pages. The Aeneid referred to is a book of 9,883 lines. 
 
memory marvels

The reference to Porson is a reference to Richard Porson (1759–1808). A web page says this about him:

"Their author [Porson] gives the impression of knowing every page of the Christian fathers [e.g. Augustine, Aquinas] as if they were indexed and capable of flashing up before him on a computer screen whenever needed. And in a manner of speaking they were. Anecdotes from several different sources attest to what we should nowadays call his photographic memory, and to that memory was committed not only classical and post-classical Greek and Latin literature but a wealth of English and some French literature as well, to which his own writings contain a host of often fleeting allusions."

Another web page says this about Richard Porson:

"It was here that his uncanny powers of memory came into full voice: one person heard him declaim an ode of Pindar in Greek, and then a whole act of Samuel Foote’s farce The Mayor of Garratt (1763), each without error. ...Porson could be presented briefly with a book, read a couple of pages from memory, and then do so backwards – almost without error."

We read in one of the pages cited above a reference to Giuseppe Gasparo Mezzofanti (17 September 1774 – 15 March 1849), who was famed for his ability to speak more than 30 different languages. Of course, any such ability would require a most prodigious memory, far beyond that of the average person. 

A 1905 news article tells of a man who developed amazing powers of memory only after a severe brain injury (the man was named J. A. Bottle, and used the stage name Datas):

better memory after brain injury

You can read the full story here:


This type of acquisition of previously absent mental powers after a traumatic injury (inexplicable under the idea that brains make minds) is sometimes called acquired savant syndrome. At the link here is an article giving another example: " At age ten, Orlando Serrell was struck on the left side of his head with a baseball; he is able to clearly remember the weather and details about his personal activities for every day since that accident (Hughes 2010, 149)." 

A 1913 newspaper article tells of the case of a man with a photographic memory:

photographic memory

You can read the account here:


The same article makes the statements below. We have a reference to "the whole of Tacitus," which means The Annals, consisting of more than 500 pages of about 400 words each, plus additional works running to hundreds of further pages. The reference to the Metaphysics of Aristotle refers to a book of more than 300 pages, each with about 400 words. The Aeneid referred to has 9,883 lines. The Iliad referred to has 15,693 lines. 

memory marvel

The account below appeared in 1905:

memory prodigies

You can read the account here:

https://chroniclingamerica.loc.gov/lccn/sn94056446/1905-06-02/ed-1/seq-4/

Below is an obituary of the prodigy called Blind Tom. The reported ability to replay any composition after having heard it only once is an ability known to exist in today's world, having been demonstrated repeatedly by Derek Paravicini.

ability to replay a song after hearing it just once

You can read the full account below:

https://chroniclingamerica.loc.gov/lccn/sn86090233/1908-07-09/ed-1/seq-8/