Monday, March 3, 2025

Newspaper Accounts of Memory Marvels (Part 3)

The credibility of claims that memory recollections come from brains is inversely proportional to the speed, capacity, and reliability with which things can be recalled. There are numerous signal-slowing factors in the brain, such as the relatively slow speed of dendrites and the cumulative effect of synaptic delays, in which signals have to travel over relatively slow chemical synapses (by far the most common type of synapse in the brain). As explained in my post here, such physical factors should cause brain signals to move at a typical speed very many times slower than the often-cited figure of 100 meters per second: a sluggish "snail's pace" of only about a centimeter per second (about half an inch per second). Ordinary everyday evidence of very fast thinking and instant recall is therefore evidence against claims that memory recall occurs because of brain activity, particularly because the brain totally lacks the things humans add to constructed objects to allow fast retrieval (things such as sorting, addressing, and indexes). Chemical synapses in the brain do not even reliably transmit signals. Scientific papers say that each time a signal is transmitted across a chemical synapse, it is transmitted with a reliability of 50% or less. (One paper states, "Several recent studies have documented the unreliability of central nervous system synapses: typically, a postsynaptic response is produced less than half of the time when a presynaptic nerve impulse arrives at a synapse." Another scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower.") The more evidence we have of very fast, very accurate, and very capacious recall (what a computer expert might call high-speed, high-throughput retrieval), the stronger is the evidence against the claim that memory recall occurs from brain activity.
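
To get a concrete sense of what such unreliability implies, here is a minimal Python sketch (my own illustration, using the reliability figures quoted above and assumed path lengths) of how the chance of a signal crossing a chain of chemical synapses collapses when each synapse transmits independently with probability p:

# Minimal sketch with illustrative assumptions: if each chemical synapse
# transmits a signal independently with probability p, the chance that a
# signal crosses n synapses in a row is p ** n, which collapses rapidly.
for p in (0.5, 0.1):              # reliability figures quoted in the papers above
    for n in (1, 5, 10, 20):      # assumed numbers of synapses along a path
        print(f"p = {p}, {n} synapses: end-to-end reliability = {p ** n:.2e}")

Even at the more generous figure of 0.5 per synapse, a path of only ten synapses has less than one chance in a thousand of delivering a signal end to end.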

It is therefore very important to collect and study all cases of exceptional human memory performance. The more such cases we find, and the more dramatic such cases are, the stronger is the case against the claim that memory is a neural phenomenon. To put it another way, the credibility of claims that memory is a brain phenomenon is inversely proportional to the speed and reliability of the best cases of human mental performance. The more cases that can be found of humans who seem to recall too quickly for a noisy, address-free brain to ever do, or who seem to recall too well for a noisy, index-free, signal-mangling brain to ever do, the stronger is the case that memory is not a neural phenomenon but instead a spiritual or psychic or metaphysical phenomenon. In part 1 and part 2 of this series, I gave many newspaper clips with examples of such exceptional human memory performance. Let us now look at some more such newspaper clips.

In the newspaper account below from 1890, we seem to have an account of what is now called Highly Superior Autobiographical Memory, an ability that dozens of people around the world reportedly have. We read this:

"Prof. Henkle makes mention of a remarkable character whom he  met at Salem, Mass., in I868, Daniel McCartney by name. McCartney was 51 years of age at that time, but proved to the satisfaction of Mr.  Henkle that he could remember where he had been, the state of the weather, etc., for each day and hour since he was 9 years old; dates covering a period of forty-two years! These remarkable feats were proved and verified by weather records and newspaper files kept in the city, and of the hundreds of tests resorted to to try his powers be never failed of proving himself a wonder of wonders in a single instance."

The account can be read here:

https://cdnc.ucr.edu/?a=d&d=PWA18901025.2.10&srpos=1&e=-------en--20--1--txt-txTI-memory+prodigy-------

I have found the original source for this claim, a paper that documents the case in the greatest detail. It is the paper "Remarkable Cases of Memory" by W.D. Henkle in The Journal of Speculative Philosophy, Vol. 5, No. 1 (January, 1871). Henkle gives transcripts of several interviews he had with Daniel McCartney, and documents McCartney's most extraordinary ability to quickly do very hard math problems "in his head," as well as his ability to correctly state the day of the week and the weather of random dates long ago, what he was doing on such dates, and whether any important news event occurred on such dates. Henkle verified the correctness of the recollections of the weather and of what McCartney was doing by asking McCartney on different days about the same dates, and checking whether the answers given were consistent.

In the newspaper story here, we read of a 90-day scripture memorization contest. We read this about the winner:

"On the day of the award it was found that among the older competitors the winner was Miss Leste May Williams, a young woman 16 years of age. With these ninety days, during which she had an attack of measles, she committed to memory and recited to the committee 12,236 verses of Scripture, covering the entire New Testament ...and including liberal selections from Genesis, Psalms, Ecclesiastes and other parts of the Old Testament."

Aitken and JB performed similar feats when they memorized epic poems of about 10,000 lines, as did George Vogan de Arrezo, who memorized the entire text of Virgil's Aeneid (consisting of 9,896 lines). The New Testament has about 180,000 words, so the feat of Leste May Williams would seem to be far more impressive than the memorization of Virgil's Aeneid, which has only 63,719 words. At the link here, the claim is made that "Indian youths have more than once repeated the whole New Testament," a feat matching the one attributed to Leste May Williams in the quote above.

Below is a description in a newspaper account of memory marvels of ancient times:

[image: memory marvels]

You can read the account here:

https://chroniclingamerica.loc.gov/lccn/sn82016339/1904-01-15/ed-1/seq-7/

An 1895 newspaper account tells us this:

[image: exceptional memory]

The same article tells us this about a man who must have known the meaning of at least a million words in many different languages:


You can read the full account below:

https://chroniclingamerica.loc.gov/lccn/sn82015104/1895-04-18/ed-1/seq-6/

Below is a newspaper account of a young girl with remarkable powers of memory:

"Miss Gussie Cottlow, of Chicago, gave an entertainment at the Presbyterian Church Tuesday evening, under the
auspices of the Y. P. S. C. E. She is called
the child pianist and is only twelve years
old. She played the  most difficult pieces from the great composers without a note before her, thus displaying in addition to
her wonderful execution a power of memorization that is in itself a marvel."

You can read the account here:

https://chroniclingamerica.loc.gov/lccn/sn82015679/1891-05-31/ed-1/seq-14/#

The 1948 newspaper story below discusses a 10-year-old boy (Pierino Gamba) who had apparently memorized dozens of long orchestral works, well enough to conduct them from memory. We are told he knew "33 complete works by memory."

[image: musical prodigy]

You can read the account of Gamba here:

https://chroniclingamerica.loc.gov/lccn/sn86075258/1948-09-22/ed-1/seq-6/

In the 1947 newspaper account here, we have another mention of Pierino Gamba. We see the young boy conducting an orchestra, and we see no musical score in front of him:

[image: boy musical prodigy]

It is known that the great Italian conductor Arturo Toscanini had a similar ability. According to the 1920 newspaper article here, he had memorized 150 opera scores so thoroughly that he "never even glances at a score when conducting." During his later conducting years Toscanini's eyesight was too poor for him to read a musical score in front of him. He was able to continue conducting many symphonies and operas because he had memorized their scores. The Encyclopedia Britannica article on Toscanini says, "His phenomenal memory stood him in good stead when, suffering from poor eyesight, he was obliged always to conduct from memory." At the site here, we read this about Toscanini: "It is believed that he conducted 117 operas and 480 concert pieces by memory, both during rehearsals and concerts." An average opera is hours in length. The newspaper account here claims Toscanini memorized one opera score in a single night.

According to the page here, Pericles Diamandi was able to memorize 50 random digits in 7 minutes, 100 random digits in 25 minutes, and 200 random digits in 2 hours and 15 minutes. On that page, we are told Diamandi repeated a sequence of 200 random digits without any error. Below are 200 random digits. I don't think 1 in 100 people could memorize a sequence that long, even if they had all day:

2 0 9 6 9 1 9 1 8 5 2 2 3 3 1 8 4 7 4 8 5 2 5 7 5 4 9 2 5 7 8 7 5 4 2 7 4 5 1 4 8 6 4 9 3 1 5 1 8 7 0 5 1 9 2 0 9 5 8 9 4 8 4 1 1 9 2 2 5 7 6 2 7 2 5 1 8 1 0 7 9 8 4 1 9 1 3 8 3 8 2 1 3 5 4 5 4 9 0 2 9 5 7 0 1 2 4 9 4 7 3 5 4 1 2 9 1 9 1 4 5 2 0 3 2 0 1 8 5 5 0 3 3 5 7 7 4 2 0 3 7 1 3 9 1 0 8 8 2 1 8 9 0 8 7 0 0 8 5 0 3 5 2 7 6 5 9 3 5 4 3 4 1 2 9 8 6 5 6 4 4 0 3 0 6 2 2 0 8 0 0 6 4 7 9 8 5 9 1 3
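
A quick calculation from the reported times shows how sharply Diamandi's memorization rate fell as the sequences grew longer. Here is the arithmetic as a small Python sketch (the digit and minute figures are the ones quoted above):

# Rates computed from the figures quoted above: (digits, minutes)
feats = [(50, 7), (100, 25), (200, 135)]   # 2 hours 15 minutes = 135 minutes
for digits, minutes in feats:
    print(f"{digits} digits in {minutes} min: {digits / minutes:.1f} digits per minute")

The rate drops from about 7 digits per minute to about 1.5, so the time cost grows much faster than the length of the sequence.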

The feat described above was far surpassed by the digit memorization feats mentioned in my post here, discussing results from the World Memory Championships.

According to the page here, a young man named Terry working at Harvard had an exceptional memory:

[image: memory marvel]

The account says Terry could recognize 10,000 faces. That is twice the face-recognition ability of the average person, since according to an article in the journal Science, the average person can recognize about 5,000 faces. The paper here tested 25 random subjects, and estimated that the number of faces recognized varied between about 1,000 and 10,000 per person, the latter figure matching the face-recognition ability attributed above to Terry.

In the 1955 newspaper account here, we read this about someone's remarkable memorization ability:

Back then papers typically had multiple columns and small print, meaning that memorizing 40 pages of a newspaper would be a stunning feat. 

In the newspaper account here, we read of a 4-year-old memory marvel named Gertie Cochran:

"Little Gertie Cochran, aged 4 years and 7 months, who has more facts and figures in her head than a college professor would contain will come to Lincoln ...Little Miss Cochran will give an exhibition from the stage of her marvelous powers of memory. Her mental store has been acquired from remembering what she has heard others repeat, as she has never been to school a day, nor does she know her letters. Her wonderful powers of readily appropriating everything that she hears were discovered shortly after they began to develop themselves when she was less than a year old. She began to talk at 7 months of age and rapidly acquired a vocabulary which attracted the attention of all who happened to come within the circle of neighbors known to her bumble home in Mount Vernon, ...The little one can recite the facts contained in the Bible from beginning to end, which, although a wonderful mental feat, is only a drop in the bucket to the oceans of mathematical statistics which she has at her tongue's end and which she recites as soon as asked without a minute's hesitation. These figures she has committed to memory by having them repeated to her, and in no case, no matter how long the string of figures, does she require that they be repeated more than twice before she has them fixed in her little head, never to be forgotten for so much ss a second when to repeat them."

Searching for Gertie Cochran's name on the Chronicling America site, I get many matches. On the page here we read this: "She can tell without a moment's hesitation the population of every city in the world, of over 100,000 inhabitants, and can answer most any other question that can be answered." The newspaper account here gives this description of Gertie's abilities:

[image: little girl memory marvel]

The newspaper article here also mentions Gertie's abilities, hailing her as the girl who "never forgets anything she sees or hears":

[image: girl who remembered everything]

Thursday, February 27, 2025

Programming Gone Astray: Iteration Inanity of the Neuroscientists' Distortion Loops

Quanta Magazine is a widely-read online magazine with slick graphics. On topics of science, the magazine is again and again guilty of the most glaring failures. Quanta Magazine often assigns its online articles about great biology mysteries (involving riddles a thousand miles over the heads of PhDs) to writers who lack even a bachelor's degree in biology. Often it will assign such articles to people identified as "writing interns." The articles at Quanta Magazine often contain misleading prose, groundless boasts or the most glaring falsehoods. I discuss some examples of such poor journalism in my posts here and here and here and here.

The writers at Quanta Magazine are very often guilty of bootlicking, a word meaning excessive deference to an authority or a superior. The latest example of bootlicking at the magazine is an article entitled "How 'Event Scripts' Structure Our Personal Memories." The subtitle makes this very untrue claim: "By screening films in a brain scanner, neuroscientists discovered a rich library of neural scripts — from a trip through an airport to a marriage proposal — that form scaffolds for memories of our experience." The claim has no basis in fact. The article follows it with this equally untrue claim: "'Event scripts' are distinct neural fingerprints that encode repeated sequences of events, such as those that unfold during a trip through the airport." No such things have been found.

The article begins by telling us tall tales about neuroscientist Christopher Baldassano, incorrectly stating this: "Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events." No such thing happened. The article is referring to a very low-quality paper co-authored by Baldassano, entitled "Representation of Real-World Event Schemas during Narrative Perception."

The study had the following flaws:

(1) The study group sizes in this task-based fMRI study were skimpy, consisting of only 15 or 16 subjects per study group. Referring to study group sizes twice as large, an article on neurosciencenews.com states this: "A new analysis reveals that task-based fMRI experiments involving typical sample sizes of about 30 participants are only modestly replicable. This means that independent efforts to repeat the experiments are as likely to challenge as to confirm the original results."

(2) No blinding protocol was used. 

(3) The paper was not preregistered, and did not test any hypothesis formulated before gathering data, using a method specified before gathering data. 

(4) The paper is a bad example of "keep torturing the data until it confesses" methodology.  The paper has graphs that are not based on simple brain scans, but are instead based on brain scan data after it has been manipulated through the most convoluted pathway of arbitrary contortions. 

Below from the paper is a discussion of only a small fraction of the "keep torturing the data until it confesses" nonsense that was occurring:

"For each story, four regressors were created to model the response to the four schematic events, along with an additional nuisance regressor to model the initial countdown video. These were created by taking the blocks of time corresponding to these five segments and then convolving with the HRF from AFNI (Cox, 1996). A separate linear regression was performed to fit the average response of each group (in the 100-dimensional SRM space) using the regressors, resulting in a 100-dimensional pattern of coefficients for each event of each story in each group. For every pair of stories, the pattern vectors for each of their corresponding events were correlated across groups (event 1 from Group 1 with event 1 from Group 2, event 2 from Group 1 with event 2 from Group 2, etc., as shown in Fig. 2a) and the four resulting correlations were averaged. This yielded a 16 X 16 matrix of across-group story event similarity. To ensure robustness, the whole process was repeated for 10 random splits of the 31 subjects, and the resulting similarity matrices were averaged across splits...To explore the dimensionality of the schematic patterns, we reran the analysis after preprocessing the data with a range of different SRM dimensions, from 2 to 100. The resulting curve of z values versus dimensionality for each region was then smoothed with the LOWESS (Locally Weighted Scatterplot Smoothing) algorithm implemented in the statsmodels python package (using the default parameters). To generate the searchlight map, a z value was computed for each vertex as the average of the z values from all searchlights that included that vertex. The map of z values was then converted into map of q values using the same false discovery rate correction that is used in AFNI (Cox, 1996)....The resampled data (time courses on the left and right hemispheres, and in the subcortical volume) were then read by a custom python script, which implemented the following preprocessing steps: removal of nuisance regressors (the 6 degrees of freedom motion correction estimates, and low-order Legendre drift polynomials up to order [1  duration/150] as in Analysis of Functional NeuroImages [AFNI]) (Cox, 1996), z scoring each run to have zero mean and SD of 1, and dividing the runs into the portions corresponding to each stimulus. All subsequent analyses, described below, were performed using custom python scripts and the Brain Imaging Analysis Kit (http://brainiak. org/)."

The quoted passage describes only a small fraction of the contortion inanity that was occurring. The paper has many other paragraphs sounding like the one just quoted. To see the ugliness of the manipulation muddle, you must look at the programming code. The authors have made their code public, and you can see it using the link here. Looking at their programming scripts, we see an appalling example of arbitrary, unjustifiable algorithms, the most convoluted spaghetti code. The brain scan data is being passed through many types of poorly documented programming loops that are doing God-only-knows-what kind of mystifying manipulation. Below is only a tiny part of the bizarre manipulations that were going on.


You might call this "iteration inanity." The output is some kind of utterly artificial "witches' brew" that cannot be called the original data gathered, or anything like it. We should not have any confidence in any of the main graphs in the paper, because they are all produced by passing brain scan data through spaghetti-code convolution contortions like those shown above. This is a severe example of "keep torturing the data until it confesses," what we might call a Spanish Inquisition level of torturing. We have an utterly artificial transmogrification mess that is the result of obscure, arbitrary programming manipulations, some gobbledygook rigmarole. The authors have not found any "event scripts" or patterns in the brain. The only thing they have found is something they created themselves through spaghetti-code programming that distorts and manipulates the original brain scan data.

[image: spaghetti code neuroscience]

[image: keep torturing data until it confesses]
Was this how the mess arose?

When good programmers are writing straightforward programming code and they know what they are doing, they tend to use intelligible variable names such as ThisYearsAccruedInterest or TotalAccruedInterest. Bad programmers use unintelligible variable names such as "d" or "ev" or "cc" or "np," as in the example above, without any comments documenting the variable names, often because they don't even understand what the variables correspond to, and cannot give an intelligible name corresponding to the variable. As a general rule, we should tend to distrust any scientific programming that uses undocumented variable names of one or two letters such as "d" or "ev" or "cc" or "np," because the use of such cryptic variable names is a strong reason for suspecting that incompetent programmers are at work.
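
The difference is easy to see side by side. Below is a hypothetical Python illustration (my own example, not taken from the paper) of the same trivial computation written both ways:

# Cryptic names: the reader must reverse-engineer what everything means.
def f(d, cc, n):
    ev = [x * cc for x in d]
    return sum(ev) / n

# Intelligible names: the intent is readable without detective work.
def mean_scaled_response(scan_values, scale_factor, num_subjects):
    scaled_values = [value * scale_factor for value in scan_values]
    return sum(scaled_values) / num_subjects

print(f([1.0, 2.0, 3.0], 0.5, 3))                     # prints 1.0
print(mean_scaled_response([1.0, 2.0, 3.0], 0.5, 3))  # prints 1.0

Both functions compute the same thing, but only the second can be checked for correctness by a reader who was not its author.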

The Quanta Magazine article then links to another paper by Baldassano and others, entitled "Top-down attention shifts behavioral and neural event boundaries in narratives with overlapping event scripts." The paper relies on the same kind of iteration inanity as the previously mentioned paper. We see the same type of loony-looking loops that make all kinds of weird, arbitrary transfigurations, contortions and manipulations of the original data, with only the scarcest comments in the source code to explain what is being done. It's another big heap of spaghetti-code nonsense doing God-only-knows-what to the original data. You can see the manipulation mess by looking at the Python files here.

Nothing real about the brain is being revealed here. If any "scripts" or patterns were discovered, the authors were merely discovering the outputs of their own data-manipulating programming loops. To claim the output of such distortion loops as being something in the brain is as misleading as picking up 100 stones from the seashore,  forming them into a sculpture of a cat, and then claiming that the waves produced a sculpture of a cat. 

It is rather obvious that our Quanta Magazine writer has not learned how to distinguish good neuroscience research from very bad neuroscience research. That writer states this:

"In 2004, the neuroscientist Uri Hasson and his colleagues at the Weizmann Institute of Science in Israel started carving a path through the thicket of voxels. In one of their studies, five people, while lying in a brain scanner, watched 30 minutes of The Good, the Bad and the Ugly (1966), a spaghetti western starring Clint Eastwood. Comparing the data from the five participants, the researchers noted when and where brain activity surged or waned in unison."

Why would anyone even bother to mention a research study using so obviously too-small a study group size of only five subjects? The writer then gives us one more bum steer. We are given false claims about a study by a neuroscientist named Chen:

"In 2012, Chen joined Hasson’s lab, then at Princeton, and extended the approach to memory. She had people watch the first episode of the television show Sherlock (2010), featuring Benedict Cumberbatch as a modern take on the legendary detective. Then the study participants talked through their memory of it, while still lying in the scanner. The experiment worked. Chen and her colleagues were able to match brain activity recorded during participants’ recollections to specific scenes around 60 seconds long — for example, when Sherlock meets Watson."

The claim is false, because the study was some very low-quality work. The link given in the Quanta Magazine article is to the paper "Shared memories reveal shared structure in neural activity across individuals" which you can read here. The study had the following defects:

  • The study used too-small study groups, such as one with only 8 subjects and another with only 9 subjects. The authors confess, "No statistical methods were used to pre-determine sample sizes but our sample sizes are similar to those reported in previous publications." It is well known that neuroscience experiments typically use far too few subjects to get results with good statistical power, so appealing to other experimenters' similarly small study groups is no excuse for failing to do a sample size calculation (to determine a good study group size).
  • The study failed to use a blinding protocol.  The authors confess, "Data collection and analysis were not performed blind to the conditions of the experiments." 
  • Instead of simply using the original brain scan data, the authors performed very many obscure and arbitrary convolutions, contortions and distortions of the original data. 
A very long part of the paper describes all the weird data manipulations and convoluted contortions that were occurring. Here is only a very small fraction of that part:

"We performed a resampling analysis wherein the individual participant correlation values for recall-recall and movie-recall were randomly swapped between conditions to produce two surrogate groups of 17 members each, i.e., each surrogate group contained one value from each of the 17 original participants, but the values were randomly selected to be from the recall-recall comparison or from the between participant movie-recall comparison. These two surrogate groups were compared using a t-test, and the procedure was repeated 100,000 times to produce a null distribution of t values. The veridical t-value was compared to the null distribution to produce a p-value for every voxel. The test was performed for every voxel that showed either significant recall-recall similarity (Fig. 3B) or significant between-participant movie-recall similarity (Fig. 3B), corrected for multiple comparisons across the entire brain using an FDR threshold of q = 0.05 (see Methods: Pattern similarity analyses); voxels p < 0.05 (one-tailed) are plotted on the brain (Fig. 4A)"

The authors have not provided a link to their source code. Based on the descriptions of their methods, we may assume that they were using the same kind of unjustifiable distortion loops that go on in the papers of Baldassano.  People with the worst programming are the least likely to make the code public. The Chen paper "Shared memories reveal shared structure in neural activity across individuals" is low-quality work that fails to follow good standards of research. No robust evidence has been provided of "shared structure in neural activity" when the same memories are experienced.  The authors seem to have merely discovered something they created themselves through their strange contortions and manipulations of data. 

The Quanta Magazine article is a very bad example of bootlicking. We have all kinds of claims that scientists accomplished something, when most of these things were not actually done, because the methods used were so poor.  Instead of such fanboy swooning, the author should have put the methods of the discussed neuroscientists under stringent critical scrutiny, which would have mainly revealed the defective methods being used. 

Part of the problem with studies like this is that we do not get any chronological account of the different attempts at fooling around with the brain scan data that was produced. We get only the final algorithmic result that the authors ended up with, after a long process of "keep torturing the data until it confesses in the weakest whisper." We may presume that what often goes on is something rather like this:

Programmer: Well, that ends my 18th programming attempt to squeeze some "patterns" out of this brain scan data, and I still have nothing. I'm getting nowhere. It's like trying to squeeze blood from a stone. 
Scientist: Keep trying! Be more creative!  Add, pad; slice, dice;  merge, purge; mix, fix; ruffle, shuffle; sift, shift; combine, align; inflate, conflate; shrink, link and sync; crop, drop, and swap; bend, blend, rend and mend; ditch, hitch, stitch and switch. Try every kind of programming loop you can think of, to try to gin up something from this data that we can call a pattern, or something we can claim as a possible representation. 
Programmer: Do I have to save all the earlier versions of my code that failed?
Scientist: Hell no. We only describe the FINAL version of the code in our paper. 

Monday, February 24, 2025

No, They Did Not Find Love in the Brain

On the day I am writing this post, which I have auto-scheduled for publication at a later date, there are two stories in the science news trying to suggest that scientists made some progress in finding a neural basis for love. One article marks the death of neuroscientist Helen Fisher.  We have a headline of "Dr Helen Fisher, MRI maven who showed just how love works, dies at 79," and a subtitle of "It's all about a chemistry."  The article that follows discusses no robust evidence that this researcher found any such thing as a neural basis for love.

The article has a link to some research by Fisher. It's a link to a page on her web site that then has a link to her 2006 study "Romantic Love: An fMRI Study of a Neural Mechanism for Mate Choice."  It's a very poorly-designed study using a too-small study group of only 17 subjects, and no control subjects. The 17 subjects were people who described themselves as being very much in love. They were shown pictures of the person they loved, along with neutral pictures. We have a claim that some areas of the brain showed more activation when the pictures of the loved ones were shown.  We have no mention of any blinding protocol, no mention of any control subjects, and no mention of any sample size calculation to try to determine whether the study group size was adequate.  

The lack of a blinding protocol and the lack of any control subjects are enough to disqualify this study as being any robust evidence for a neural basis for love. An intelligent way to design a study like this would be to have a number of control subjects equal to the number of people who claimed to be in love, the control subjects being people who were not in love. Then "blinded" analysts examining only the brain scans (without knowing whether a particular brain scan belonged to someone who claimed to be in love)  could examine the scans, and attempt to predict whether a particular set of scans belonged to a person in love who was seeing a picture of his loved one. A high predictive success (maybe 90%) might suggest some neural basis for love, although it would be necessary to replicate such a finding. 
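
Simple arithmetic shows why high blinded predictive success would be meaningful. Here is a short Python sketch (using the 17-subject figure from the study plus an assumed 17 controls) of how unlikely 90 percent accuracy would be under pure guessing:

# How unlikely would ~90% blinded classification accuracy be by chance?
# Assumes 17 in-love subjects plus 17 controls (my illustrative design).
from scipy import stats

n_scans = 17 + 17                   # 34 scan sets to classify
k_correct = round(0.9 * n_scans)    # about 31 of 34 correct
# probability of doing at least that well by coin-flip guessing
p = stats.binom.sf(k_correct - 1, n_scans, 0.5)
print(f"P(at least {k_correct}/{n_scans} correct by guessing) = {p:.2e}")

Guessing would almost never reach that accuracy, which is exactly why a blinded prediction test would be informative, and why its absence matters.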

Nothing like that is done in the study. Instead, it simply had analysts checking all parts of the brain, looking for some area that could be claimed as an area of "activation." A claim to have found a few such tiny regions is just what we would expect even if the brain has no involvement in love. Similarly, if you scanned the livers of 17 subjects while they were looking at photos of their loved ones and photos of strangers, and you had freedom to check any of 1000 tiny liver areas, you would no doubt find random variations that would allow you to claim that some area of the liver is involved in love. What is going on in Fisher's study is mere noise mining. Someone is looking at random, noisy data, and trying to find evidence of something the data is not actually presenting.
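
The noise-mining problem is easy to demonstrate. The following Python sketch (all numbers illustrative) feeds pure random noise for 1,000 tiny regions in 17 subjects into a standard significance test, and "significant" regions appear anyway:

# Pure-noise demonstration: with 1000 regions and freedom to check them all,
# roughly 5% will come out "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_regions, n_subjects = 1000, 17
loved = rng.standard_normal((n_regions, n_subjects))     # pure noise
neutral = rng.standard_normal((n_regions, n_subjects))   # pure noise

result = stats.ttest_rel(loved, neutral, axis=1)
n_hits = int(np.sum(result.pvalue < 0.05))
print(f"regions 'significant' at p < 0.05 from pure noise: {n_hits}")  # ~50

About 50 of the 1,000 pure-noise regions come out "significant," which is why freedom to search everywhere guarantees something to report.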

In an fMRI study you know that a decent "superior activation" has been found when the paper uses the phrase "percent signal change" to indicate a signal change of more than 2%.  The Fisher paper has no mention of any percent signal change. 

An equally weak piece of research is Fisher's 2012 paper "NEURAL CORRELATES OF MARITAL SATISFACTION AND WELL-BEING: REWARD, EMPATHY, AND AFFECT," presented on Fisher's web site as an example of her work. It has exactly the same flaws as the 2006 paper discussed above: a too-small study group size of only 17 subjects, a failure to do a sample-size calculation (which would have revealed the inadequate statistical power), a lack of any control subjects, a lack of any blinding protocol, and a failure to report any impressive result as a percent signal change of even 1%. No effect size is reported. 

The same flaws are also found in Fisher's 2011 weak science paper "Neural correlates of long-term intense romantic love," another paper presented on her web site as if it were an example of her best work.  It has a too-small study group size of only 17 subjects, a failure to do a sample-size calculation (which would have revealed the inadequate statistical power), a lack of any control subjects, a lack of any blinding protocol, and a failure to report any impressive result as a percent signal change of even 1%. No effect size is reported. 

It's very easy to explain the kind of tiny blips reported in these studies without any belief that romantic love has a neural basis. In the type of studies done, people were shown photographs either of their loved one or of a stranger. Seeing a photograph of a loved one, a typical subject might have had a smile of recognition. But it is well known that even very small muscle movements can cause fMRI blips.

In the article about Fisher's death, we have this strange quote about the 2006 paper discussed above: " 'I distrust about 95 percent of the MRI literature and I would give this study an 'A'; it really moves the ball in terms of understanding infatuation love,'  Dr Hans Breiter, director of the Motivation and Emotion Neuroscience Collaboration at Massachusetts General Hospital, told The New York Times after the publication."  No, all of the papers discussed have the same kind of flaws that should cause you to distrust 95% of the fMRI literature. 

Two of the three papers by Fisher used the very misleading "lying with colors" technique in which tiny differences in brain activation less than 1% are misleadingly depicted in bright red or bright yellow, giving the incorrect impression of a major difference when there was no such difference. 

[image: neuroscientist deception]

Neuroscientists deceive us with such "lying with colors" diagrams, which visually create the impression that very tiny differences in activity, such as 1 part in 200, are very big differences. Another way in which neuroscientists deceive us in studies such as these is by making statements that there was "activation" in some particular region of the brain. Such language gives readers the impression that there was some turning-on effect in which an inactive region of the brain started to become active. The truth is that all regions of the brain are electrically active at all times. All neurons fire at rates averaging about 1 time per second or more.

So imagine you have found, from an fMRI reading, a case in which some tiny region of the brain shows maybe 1 part in 200 greater activity than other regions. Is it correct to call that "activation"? No, it is not. Activation means starting to become active. When all regions of the brain are continually active, and are not varying in activity by more than about 1 part in 200, it is deceptive to claim that "activation" occurred when some little difference such as 1 part in 200 was first seen. This deception occurs abundantly in a large fraction of brain scan papers, perhaps most.
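
The arithmetic is worth spelling out. Here is a minimal sketch (illustrative numbers) of what a 1-part-in-200 difference amounts to when expressed as a percent signal change:

# A difference of 1 part in 200 between conditions, as percent signal change.
baseline_signal = 200.0
task_signal = 201.0                 # 1 part in 200 greater activity
percent_signal_change = (task_signal - baseline_signal) / baseline_signal * 100
print(f"percent signal change: {percent_signal_change:.2f}%")   # 0.50%

A 0.50% change is far below the roughly 2% level that the earlier discussion treats as a decent "superior activation."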

On the same day the death of Fisher was announced, we had a link to a news story promoting the paper "Six types of loves differentially recruit reward and social cognition brain areas." We read of an fMRI study about love and the brain, one using a much larger study group size of 55 subjects. A study group size like that may make you realize how inadequate were the study group sizes (only 17) of Helen Fisher's papers discussed above. If scientists felt a need to use 55 subjects for a brain scan study involving love, there was presumably some reason why they thought a study group size such as 17 was far too small.

Unfortunately the new paper uses a protocol that is nonsensical. It involved scanning people's brains while passages such as these were read to them (there were dozens of different paragraphs like these):
  • "You are having a candle lit dinner with your partner. You look into their eyes over the table, and you share a mutual understanding without words. You love your partner."
  • "Your child runs to you joyful on a sunny meadow. You smile together and the sunrays flicker on their face. You feel love for your child."
  • "You need help moving house and you call your friend. They promise to of course come to help out, and soon you are lifting cardboard boxes together in a van. In the middle of the ordinary situation you feel love for your friend."
The authors have assumed that hearing such phrases would cause subjects being brain-scanned to feel different types of love. But there is no reason to think that such phrases would do that. Did you feel any love upon reading the sentences above? Almost certainly you did not. 

The paper provides no evidence that the differences in brain activity recorded were greater than 1% or even half of that, and there is no reason to think that they have picked up anything other than random variations in brain activity, variations having nothing to do with love. The paper makes no mention of any percent signal change detected, which is typically what would happen when the detected percent signal changes are very low and unimpressive.  No robust evidence has been provided to show a brain basis for love. 

[image: decline of science news]

A recent fMRI study seems like a rare example of a well-done brain scan experimental study. The study "Resting-State Functional Connectivity of the Amygdala in Autism: A Preregistered Large-Scale Study" involved brain scans of more than 400 people, including more than 200 with autism. Rather than the usual "fishing expedition" allowing investigators to check any of hundreds of places in the brain, like the Fisher studies discussed above,  the study was a pre-registered study that committed itself to examining only differences in one tiny region of the brain, the amygdala region. The study reports finding "no reliable evidence for atypical functional connectivity of the amygdala in autism, contrary to leading hypotheses."  When we look at the pre-registration in a study like this and the use of hundreds of brain scans, the Helen Fisher papers discussed above seem all the more like junk work. 

Thursday, February 20, 2025

Brain Tumors Seem to Have Relatively Little Effect on Cognitive Function

One way to test the "brains make minds" hypothesis is to examine the effect of brain tumors on cognitive performance. Under the hypothesis that the brain makes the mind and the brain stores memories, we should expect brain tumors to have a huge effect on cognitive performance. That does not seem to be the case at all. 

The most common test of cognitive performance used by doctors is the MMSE, which stands for Mini Mental State Examination. The link here gives you some of the questions used on the test. An example question asks you to count backwards from 100, going back 7 at each step (for example, 93, 86, 79, 72, 65). The MMSE test has a maximum score of 30. Adults with normal cognitive function will tend to score about 29 on the test. The link here says that a score of 24 or higher is considered "normal."

Another widely used test of mental function is the Raven's Colored Progressive Matrices test, called the RCPM. The test has a maximum score of 36. According to the paper here, an average score for an elderly person is about 26.

The study here ("THE EFFICACY OF RAVEN’S COLORED PROGRESSIVE MATRICES FOR PATIENTS WITH BRAIN TUMOR") gives MMSE and RCPM scores for 43 patients, before and after surgery for a brain tumor.  We read about some remarkable results: "Median pre- and post-operative MMSE scores were 29 points (14– 30) and 29 points (21–30), respectively. Median pre- and post-operative RCPM scores were 33 points (25–36) and 35 points (18–36), respectively."

Let us consider how much this result contradicts the "brains make minds" dogma. The results for the Mini Mental State Examination (MMSE) test were almost perfect for the 43 subjects with brain tumors. They scored a median of 29, only one point less than a perfect score of 30.  For the Raven's Colored Progressive Matrices test, the median score of the brain tumor patients after brain surgery was an almost perfect score of 35, only one point less than the maximum of 36. Moreover, after the brain surgery the score for these patients improved from 33 to 35.  Nothing is done in a brain tumor surgery to cognitively fix a brain. The sole purpose of the surgery is to remove the cancerous tumor, hopefully in some way that will prevent the tumor from reappearing. Very often the amount of brain tissue removed is greater than the amount that looks grossly cancerous. Under the hypothesis that the brain makes the mind, we should not at all expect patients to be getting better scores on mental tests after they had surgery to remove a brain tumor. 

Another interesting study is "Cognitive reserve and individual differences in brain tumour patients." The study involved about 700 brain tumor patients who were cognitively tested. The patients included 143 low-grade glioma patients and 181 high-grade glioma patients. High-grade glioma patients are those with really bad brain tumors. The study has the limitation that it fails to give us the average or median cognitive test scores that it analyzed. All that we are given is some analysis expressed using correlation coefficients. A correlation coefficient is a number between −1 and 1 telling us how strongly one thing is associated with another. A correlation of 0 indicates no linear association, and a correlation of 1 (or −1) indicates a perfect linear association; correlation by itself does not establish causation.

Table 4 of the study indicates there was virtually no correlation between the volume of the tumor and performance on the Raven's Colored Progressive Matrices test: a negligible correlation of only −0.0345. The same table indicates there was virtually no correlation between performance on the Raven's Colored Progressive Matrices test and whether the tumor was a high-grade glioma: a negligible correlation of −0.0310. We see a much higher correlation of 0.349 for "fronto-parietal" tumors, but Table 1 says there were only six patients with "fronto-parietal" tumors, too small a sample to be very significant evidence.
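
To see how negligible a correlation of about −0.03 is, consider the shared variance it implies. The small Python sketch below uses the reported coefficient, plus simulated data with made-up numbers, merely to illustrate the size of the association:

# Shared variance implied by the correlation reported in Table 4.
import numpy as np

r = -0.0345                          # correlation reported in Table 4
print(f"shared variance: {r ** 2:.2%}")   # about 0.12% of score variance

# Simulated data at roughly that correlation level (made-up numbers):
rng = np.random.default_rng(2)
n = 324                              # roughly the glioma patients mentioned above
tumor_volume = rng.normal(50.0, 20.0, n)
score = 26.0 - 0.005 * (tumor_volume - 50.0) + rng.normal(0.0, 4.0, n)
print(f"simulated r: {np.corrcoef(tumor_volume, score)[0, 1]:.3f}")

A correlation that weak means tumor volume explains about one part in a thousand of the variation in test scores, which is to say essentially nothing.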

Table 3 of the paper here ("Pre-Surgery Cognitive Performance and Voxel-Based Lesion-Symptom Mapping in Patients with Left High-Grade Glioma") gives the results of cognitive tests on 85 people with high-grade glioma in the left hemisphere. Under the dogma that the brain makes the mind, we would expect most of these people with severe brain tumors to have performed poorly on such tests. But the table does not show that. Instead we see that on 17 out of 18 tests, most of the patients did not perform in a "pathological" manner. Only on a "verb naming" task did a majority (61%) of the subjects perform poorly. On 17 out of 18 tests, an average of only about 25% of the subjects performed poorly.

The paper here ("Quality of life in patients with stable disease after
surgery, radiotherapy, and chemotherapy for
malignant brain tumour") analyzed cognitive data on 57 brain tumor patients with malignant brain tumors, a particularly severe type. We read this: "Separate Mann Whitney tests did not show any differences between the tumour and control groups in terms of score for FLIC (U=476.5, p=0.031), ADL (U=674, p=0.89), STAI1 (U=502, p=0.059), STAI2 (U=641, p=0.65), SRDS (U=618, p=0.49), Raven’s coloured progressive matrices (U=533, p=0.11), attentional matrices (U=624, p=0.53), trail making test part A (U=673.5, p=0.91) and B (U=624, p=0.53), or story recall scores (U=637, p=0.62)."  The average score on the Raven’s Colored Progressive Matrices test for the brain tumor patients was about 28 (27.86).  The patients with severe malignant brain tumors scored higher than control subjects on this test, who got an average score of only 26.0.  According to the paper here, an average score for an elderly person is about 26. So the people with the malignant brain tumors (a particularly severe type) scored higher on the cognitive test than normal people of their age. 

The paper here ("Evaluation of mini-mental status examination score after gamma knife radiosurgery as the first radiation treatment for brain metastases") gave the MMSE cognitive test on 119 patients before and after treatment for brain surgery. We read, "In 16 of 37 patients (43.2 %) with pre-GKS MMSE scores ≤27, the MMSE scores improved by ≥3 points, whereas 15 of all patients (19.7 %) experienced deteriorations of ≥3 points." It sounds like the number of increases in cognitive scores was as high as the number of decreases.

The study here ("Episodic Memory Impairments in Primary Brain Tumor Patients") studied problems with memory in 158 people having brain tumors. Using a method that sounds as if it was trying to report as many people as possible as having memory problems, the study claims that only 42% of those with brain tumors had any memory problems. It reports that "No correlations between specific tumor locations and types of episodic memory impairment were found, except for the association of encoding deficits with corpus callosum infiltration (Logistic regression: OR 4.36, β = 1.68, 95% IC 1.37–12.58, p = .02)."  Since the people with brain tumors are typically old people, and maybe something like 40% of old report report some type of memory problem, we have no clear evidence that brain tumors are causing memory problems.  This study follows a frustrating methodology in which it refuses to report the degree of dysfunction in any of the people reported as having memory problems. We have a claim about what percentage of brain tumor patients have some kind of memory problem, without any details on how bad such problems were.   This is just what we would expect if only tiny performance differences were found. 

We get a "memory problem criteria" description that sounds like it is trying to place as many people as possible in a category of "people with memory problems":

"Each of the nine scores recorded (number of word recalled at immediate recall), free recalls (1, 2, 3, delayed) and cued recalls (1, 2, 3, delayed) were considered abnormal when it corresponded to a performance equal to or under the fifth percentile of the healthy controls normative data (van der Linden et al., 2004). An encoding deficit was diagnosed when the immediate recall was abnormal (the assumption being that the items were not present in the working memory immediately after they have been red, and so not encoded). A failure in free recalls corresponded to at least 2/4 abnormal scores and a failure in cued recalls corresponded to at least 2/4 abnormal scores (the test is composed of four free and four cued recalls). A storage deficit was diagnosed in the case of a failure in free recalls associated with a failure in cued recalls. This means that the cues didn’t improve the number of items recalled, assuming that the items was not stored. A retrieval deficit was diagnosed in the case of a failure in free recalls isolated (normal cued recalls). Indeed, the cues improved the number of items recalled comparably to healthy controls, giving a proof that items was stored in the memory but not available at the moment. Furthermore, an association of storage and retrieval deficit was diagnosed in the case of a failure in free recalls and a failure in cued recalls, but with limited improvement (incomparably to healthy controls) of the total number of items recalled with cue."

Despite this method, which sounds as if it was designed to make as many as possible be categorized as people with a memory problem, only 42% of those with brain tumors were classified as having a memory problem. We are left here with no good evidence of brain tumors causing substantial memory problems.  The finding that "no correlations between specific tumor locations and types of episodic memory impairment were found" (with only one minor exception) is consistent with the idea that memories are not actually stored in brains.

Another study of 121 patients with severe brain tumors (Stage III and Stage IV) gave four tests of working memory and two tests of episodic memory, finding that only 10%, 17%, 22%, 23%, 28% and 18% had a "clinically relevant deficit." Referring to radiation therapy to treat brain cancer, the paper "Effects of Radiotherapy on Cognitive Function in Patients With Low-Grade Glioma Measured by the Folstein Mini-Mental State Examination" says that "Only a small percentage of patients had cognitive deterioration after radiotherapy."

The study "Efficacy and Cognitive Outcomes of Gamma Knife Radiosurgery in Glioblastoma Management for Elderly Patients" studied 49 patients with the most severe type of brain tumor, a glioblastoma. Table 1 tells us the patients had a median tumor size of 5.4 centimeters (about two inches). According to Table 2, a year after radiation therapy, the average MMSE score of the patients was about 25, slightly below average, but still pretty good. Before the surgery, when the patients had lost a lot of their brain tissue due to tumors, the average MMSE score was a fairly good 27 (30 being the highest score possible). 

The study "Detrimental Effects of Tumor Progression on Cognitive
Function of Patients With High-Grade Glioma" is one of those studies that makes it hard to extract the most relevant data from it. The most relevant fact that I can extract from it is found in Table 1, where I see that the number of patients with very bad brain tumors (high-grade glioma) and normal cognitive scores (as tested by the MMSE) was 757, and the number with abnormal MMSE scores was only 389. 659 of these 757 patients with normal cognitive scores had Grade 4 brain tumors, the worst type. 

The paper "Prospective memory impairment following whole brain
radiotherapy in patients with metastatic brain cancer" gives us the MMSE cognitive test scores for 81 patients before and after they had treatment for metastatic brain cancer. The average score before the treatment was 27, and the average score after the treatment was 26 (Table 2). So there was no big difference. The link here says that a score of 24 or higher on the MMSE is considered "normal." According to Table 1, 23 of the patients had a brain tumor larger than 3 centimeters (1 inch). 

Below is a quote from the paper "Meningiomas and Cognitive Impairment after Treatment: A Systematic and Narrative Review."  The paper summarizes other papers studying the cognitive effects of a common type of brain surgery. The quote below refers to studies that compared cognitive function before and after brain surgery. We see some studies discussing a negative cognitive effect, but relatively few. The reported fractions are not very high.  There is also reference to a number of studies showing improvements in cognitive abilities after brain surgery. 

"Worsening of verbal, working and visual memory (9/22 studies) 
Worsening of complex attention and orientation (1/22 studies) 
Worsening of executive functioning (3/22 studies) 
Worsening of language and verbal fluency (2/22 studies) 
Worsening of cognitive flexibility (4/22 studies) 
Worsening in all neurocognitive domains (1/22 studies) 
Improvement in verbal, working and visual memory (3/22 studies) 
Improvement of complex attention and orientation (3/22 studies) 
Improvement of executive functioning (2/22 studies) 
Improvement of cognitive flexibility (1/22 studies)."

The same paper summarizes studies comparing those who had brain surgery with normal control subjects. We seem to have only scanty evidence of worse performance after brain surgery, because the reported fractions are low:

"Worse verbal, working and visual memory (2/22 studies)
Worse complex attention and orientation (1/22 studies)
Worse executive functioning (1/22 studies)
Worse language and verbal fluency (2/22 studies)
Worse cognitive flexibility (4/22 studies)."

The fractions quoted are small fractions such as 5% or 10%, and we don't know how much of a decline the "worse" refers to. When you also take into account that neuroscientists will typically be biased towards reporting declines in function after brain surgery rather than improvements or no differences (in accordance with their dogma that brains produce minds), it is not clear that we have here any very clear evidence of decline in cognitive function after this type of brain surgery. 

Below is a very notable case of the almost complete destruction of a brain by a brain tumor, but with a high preservation of mental function. It comes from page 71 of the document here (and the newspaper story here repeats the same details).

[image: high cognition with little brain]

Overall, these results are quite compatible with the idea that your brain does not make your mind, and that your brain is not the storage place of your memories.

I can recall a personal experience here. Years ago I traveled to see a beloved relative who was dying from the spread of metastatic breast cancer. When I met her I saw a very noticeable tumor protruding from her head. I seem to recall the skull protrusion being about the size of a fist, or nearly as large. I can assume that a very large part of the brain had been destroyed by the cancer. But when I talked to her, I could notice no change in her cognition. I asked her an important question about events in the past, and she gave a meaningful, detailed answer with relevant examples. I left, and a few weeks later she died. The lack of disturbance in cognition, speech and recollection in someone with a very visually noticeable brain tumor was striking.

Thursday, February 13, 2025

The Reason You Will Never Be Able to Upload Your Mind Is the Same Reason You Won't Ever Need To

A very accomplished technologist and inventor, Ray Kurzweil has become famous for his prediction that there will before long be a "Singularity" in which machines become super-intelligent (a prediction made in his 2005 book The Singularity Is Near). In his 1999 book The Age of Spiritual Machines, Kurzweil made some very specific predictions about specific years: the year 2009, the year 2019, the year 2029, and the year 2099. Let's look at how well his predictions for the year 2019 hold up to reality.

Prediction #1: “The computational ability of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second)."

Reality: A $4,000 computing device in 1999 dollars is equivalent to about a $7,700 computing device today. There is no $7,700 computing device that can compute even a hundredth as fast as 20 million billion calculations per second. The fastest current processor for a machine under $8,000 is the Intel Core i9-14900KS, with a clock speed of about 6 gigahertz, only about 6 billion clock cycles per second. If you shell out about $8,000 for a high-end gaming computer, you can get a few teraflops of floating point calculations per second. Even if we use that figure rather than the clock speed, we still have computing capability more than 1,000 times smaller than the computing capability predicted by Kurzweil for such a device in 2019.
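
The shortfall is easy to quantify. Here is a one-line Python check using the figures above (with "a few teraflops" assumed to be about 4 teraflops):

# Checking the arithmetic of Prediction #1 under the assumptions stated above.
kurzweil_ops_per_sec = 20e15   # "20 million billion calculations per second"
gaming_pc_flops = 4e12         # assumed "a few teraflops"
print(f"shortfall factor: {kurzweil_ops_per_sec / gaming_pc_flops:,.0f}x")  # 5,000x

So even on the generous teraflops figure, the prediction misses by a factor of several thousand.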

Prediction #2: “Computers are now largely invisible and are embedded everywhere – in walls, tables, chairs, desks, clothing, jewelry, and bodies.” 

Reality: Nothing like this has happened, and while computers are smaller and thinner, they are not at all "largely invisible."

Prediction #3: “Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory 'lenses,' are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.” 

Reality: Such things are not at all used "routinely," and I am aware of no cases in which they are ever used. There is very little communication through virtual reality displays, and when it is done it involves bulky apparatus like the Oculus Rift device, which resembles a scuba mask.

Prediction #4: “People routinely use three-dimensional displays built into their glasses, or contact lenses. These 'direct eye' displays create highly realistic, virtual visual environments overlaying the 'real' environment.”

Reality: Very few people use any such technology, even in the year 2025. 

Prediction #5: “High-resolution, three dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity." 

Reality: This sounds like a prediction of some reality similar to the Holodeck first depicted in the TV series Star Trek: The Next Generation, or a prediction that realistic virtual sex would be available by 2019. Alas, we have no such things.

Prediction #6: “Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.”

Reality: Paper books and documents are used much less commonly than in 1999, but it is not at all true that most learning  occurs through “intelligent, simulated software-based teachers.” 

Prediction #7: “The vast majority of transactions include a simulated person.”

Reality: A large percentage of transactions are electronic, but very few of them involve a simulated person.

Prediction #8: “Automated driving systems are now installed on most roads.”

Reality: Although there are a few self-driving cars on the road, roughly 99% of traffic still consists of old-fashioned vehicles with human drivers, and automated driving systems are not installed on most roads.

Prediction #9: "Most flying weapons are small -- some as small as insects -- with microscopic flying weapons being researched."

Reality: The public has not yet even heard of tiny flying weapons.

Prediction #10: "The expected lifespan...has now substantially increased again, to over one hundred." 

Reality: A November 2023 article is entitled "Life expectancy for men in U.S. falls to 73 years — six years less than for women, per study."

Prediction #11: "Keyboards are rare."

Reality: No, keyboards were not rare either in 2019 or today. 

Prediction #12: "The majority of 'computes' of computers are now devoted to massively parallel neural nets and genetic algorithms."

Reality: Not true. So-called genetic algorithms have proved pretty useless as a computing methodology. Software is not significantly developed through any Darwinian means, because random mutation combined with survival of the luckier results is not a workable method for creating very complex systems or very complex functional innovations, contrary to the claims of Darwinist biologists. Darwinism has flunked the software test.  
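To make clear what kind of procedure is being discussed, here is a toy sketch in Python of the "random mutation plus survival of the luckier result" loop that genetic algorithms rely on. The target string and parameters are hypothetical, chosen only for illustration (this is the well-known "weasel" toy problem, not a real software-production system):

import random

# Toy "mutate and keep the luckier variant" loop. Note that it converges
# only because the goal is written into the fitness function in advance.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count how many characters already match the predefined target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Randomly change each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
for _ in range(200000):
    if candidate == TARGET:
        break
    child = mutate(candidate)
    if fitness(child) >= fitness(candidate):   # keep the luckier variant
        candidate = child
print(candidate)

Notice that the loop succeeds only because the goal is spelled out in the fitness function in advance; it demonstrates nothing about the ability of blind, goal-free mutation to produce complex functional innovations, which is precisely the point at issue.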

So Kurzweil's predictions for 2019 were very far off the mark. But Kurzweil is still playing the role of Grand Techno-Prophet. In a Freethink.com article last year, Kurzweil predicted that in the 2030's nanobots (microscopic robots injected into the body) will produce a great increase in lifespan. In the same article he predicted that the uploading of human minds into computers will occur by the 2040's.  Are there any reasons to think that his predictions for the 2030's and 2040's are unlikely to be correct? There certainly are. 

One reason is that Kurzweil never did much to prove his claim that there is a Law of Accelerating Returns causing the time interval between major events to grow shorter and shorter. On page 27 of The Age of Spiritual Machines he tries to derive this law from evolution, claiming that natural evolution follows such a law.  But we don't see such a law being observed in the history of life.  Not counting the appearance of humans, by far the biggest leap in biological order occurred not fairly recently, but about 540 million years ago, when almost all of the existing animal phyla appeared rather suddenly during the Cambrian Explosion.  No animal phylum has appeared in the past 480 million years. So we do not at all see such a Law of Accelerating Returns in the history of life.  There has, in fact, been no major leap in biological innovation during the past 30,000 years. 

Kurzweil's logic on page 27 contains an obvious flaw. He states this:

"The advance of technology is inherently an evolutionary process.  Indeed, it is a continuation of the same evolutionary process that gave rise to the technology-creating species. Therefore, in accordance with the Law of Accelerating Returns, the time interval between salient advances grows exponentially shorter as time passes." 

This is completely fallacious reasoning, both because the natural history of life has not actually followed a Law of Accelerating Returns, and also because the advance of technology is not a process like the evolutionary process postulated by Darwin.  The evolutionary process imagined by Darwin is blind, unguided, and natural, but the growth of technology is purposeful, guided and artificial.
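It is worth spelling out the arithmetic structure of the premise Kurzweil is assuming. If each interval between "salient advances" were half the length of the previous one, the total elapsed time would converge to a finite limit (a geometric series), which is what makes a finite "Singularity" date seem to follow. A minimal sketch, using a hypothetical 16-year first interval:

# Exponentially shrinking intervals form a geometric series with a
# finite sum, so a finite "arrival date" follows from the premise.
interval, total = 16.0, 0.0    # hypothetical first interval: 16 years
for _ in range(60):
    total += interval
    interval /= 2
print(total)                   # approaches 32.0 and never exceeds it

But the conclusion is only as good as the premise, and as the history of life discussed above shows, no such pattern of ever-shrinking intervals is actually observed.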

On the same page, Kurzweil cites Moore's Law as justification for the Law of Accelerating Returns. For a long time this rule-of-thumb held true: the number of transistors on a chip doubled roughly every two years. But in 2015 Moore himself said, "I see Moore's law dying here in the next decade or so." In the Wikipedia.org article on Moore's Law, we read, "Some forecasters, including Gordon Moore, predict that Moore's law will end by around 2025." It is now clear that fundamental limits on making things smaller will cause Moore's Law to stop being true before long. 
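A back-of-the-envelope sketch shows why such limits must bind, assuming (hypothetically, for simplicity) that every two-year doubling of transistor count comes from shrinking feature area:

import math

# If each doubling of transistor count comes from halving feature area,
# the linear feature size falls by a factor of sqrt(2) per doubling.
# Starting from a ~250 nm process (late 1990s), count the doublings
# until features reach roughly atomic scale (~0.5 nm).
start_nm, atomic_nm = 250.0, 0.5
doublings = 2 * math.log2(start_nm / atomic_nm)
print(f"{doublings:.1f} doublings, about {2 * doublings:.0f} years")
# -> roughly 18 doublings, i.e. about 36 years from the late 1990s

On those rough assumptions the shrinking runs out sometime in the 2030s, consistent with the forecasts just quoted.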

 Machines smarter than humans would require stratospheric leaps forward in computer software, but computer software has never grown at anything like an exponential pace or an accelerating pace.  Nothing like Moore's Law has ever existed in the world of software development.  Kurzweil has occasionally attempted to suggest that evolutionary algorithms will produce some great leap that will speed up the rate of software development. But a 2018 review of evolutionary algorithms concludes that they have been of little use, and states: "Our analysis of relevant literature shows that no one has succeeded at evolving non-trivial software from scratch, in other words the Darwinian algorithm works in theory, but does not work in practice, when applied in the domain of software production." 

Page 256 of the document here refers to problems with software which throw cold water on hopes that progress with computers will be exponential:

"Lanier discusses software 'brittleness,'  'legacy code,' 'lock-in,' and 'other perversions' that work counter to the logic of Kurzweil’s exponential vision. It turns out there is also an exponential growth curve in programming and IT support jobs, as more and more talent and hours are drawn into managing, debugging, translating
incompatible databases, and protecting our exponentially better, cheaper, and more connected computers. This exponential countertrend suggests that humanity will
become 'a planet of help desks' long before the Singularity."

We already have something like this with so-called artificial intelligence systems, which by now involve databases and code so complex that no one adequately understands them. We hear talk of AI "hallucinations" that sound unfixable, because the AI systems are "black boxes" that humans cannot dive into and debug as they would code written by humans. 

As for Kurzweil's predictions about nanobots producing a surge in lifespan during the 2030's, there are strong reasons for doubting them. Futurists have long advanced the idea that tiny nanobots might be injected into people, to circulate through the human body and repair its cells. But there are reasons to think such things are technically unfeasible. Nobel Prize winner Richard Smalley argued that the molecular assemblers imagined by nanotechnology enthusiast Eric Drexler were not feasible, for various scientific reasons such as what he called the “fat fingers” problem.

There is another strong reason for rejecting the idea that nanobots will be able to produce some great increase in human lifespan: the stratospheric complexity of human biology and the vast levels of organization in human bodies. Transhumanists have generally failed to understand the stratospheric amount of organization and functional complexity in living things. An objective and very thorough scholar of biological complexity will be very skeptical of any claim that devices humans manufacture will be able to make humans live very much longer. There is so very much about the fundamentals of human life that biologists still don't understand. The visual below illustrates the situation. The problems listed are problems a hundred miles over the heads of scientists. 

[Image: problems scientists don't understand]

While scientists can list stages in cell reproduction, scientists are unable to even explain the marvel of cell reproduction, which for many cells is a feat as impressive as an automobile splitting up to become two separate working automobiles. Scientists cannot even explain how protein molecules are able to fold into the three-dimensional shapes needed for their functions, shapes not specified by DNA. Without resorting to lies such as the lie that DNA is a body blueprint, no scientist can credibly explain how a speck-sized zygote (existing just after conception) is able to progress over nine months to become the vast hierarchical organization of the human body. No scientist has a decent physical explanation of how memories can form, or how they can persist for a lifetime. No scientist has a decent explanation of how humans are able to instantly recall lots of detailed relevant information as soon as they hear a name mentioned, or see a photo of someone, a feat that should be impossible using a brain that is completely lacking in sorting, indexes and addresses (the type of things that make instant recall possible).  

With human knowledge being so fragmentary, and so many basic problems of explaining humans and their minds being so far over the heads of scientists, how improbable is it that humans will be able to overhaul their biology on the microscopic level by using microscopic robots called nanobots, or any other technology we can envision in the next fifty years?

[Image: transhumanists]

The biggest reason for doubting Kurzweil's predictions beyond 2019 is that they are based on assumptions about the brain and mind that are incorrect. Kurzweil is an uncritical consumer of neuroscientist dogmas about the brain and mind. He assumes that the mind must be a product of the brain, and that memories must be stored in the brain, because that is what neuroscientists typically claim. If he had made an adequate study of the topic, he would have found that the low-level facts collected by neuroscientists do not support the high-level claims that neuroscientists make about the brain, and frequently contradict such claims. To give a few examples:

  • There is no place in the brain suitable for storing memories that last for decades, and things like synapses and dendritic spines (alleged to be involved in memory storage) are unstable, "shifting sands" kind of things which do not last for years, and which consist of proteins that only last for weeks.
  • The synapses that transmit signals in the brain are very noisy and unreliable,  in contrast to humans who can recall very large amounts of memorized information without error.
  • Signal transmission in the brain must mainly be a snail's pace affair, because of very serious slowing factors such as synaptic delays and synaptic fatigue (wrongly ignored by those who write about the speed of brain signals), meaning brains are too slow to explain instantaneous human memory recall.
  • The brain seems to have no mechanism for reading memories.
  • The brain seems to have no mechanism for writing memories, nothing like the read-write heads found in computers.
  • The brain has nothing that might explain the instantaneous recall of long-ago-learned information that humans routinely display, and has nothing like the things that allow instant data retrieval in computers.
  • Brain tissue has been studied at the most minute resolution, and it shows no sign of storing any encoded information (such as memory information) other than the genetic information that is in almost every cell of the body.
  • There is no sign that the brain or the human genome has any of the vast genomic apparatus it would need to have to accomplish the gigantic task of converting learned conceptual knowledge and episodic memories into neural states or synapse states (the task would presumably require thousands of specialized proteins, and there's no real sign that such memory-encoding proteins exist).
  • No neuroscientist has ever given a detailed explanation of how such a gigantic translation task of memory encoding could be accomplished (one that included precise, detailed examples).
  • Contrary to the claim that brains store memories and produce our thinking, case histories show that humans can lose half or more of their brains (due to disease or hemispherectomy operations), and suffer little damage to memory or intelligence (as discussed here). 

Had he made a study of paranormal phenomena, something he shows no signs of having studied, Kurzweil might have come to the same idea suggested by the neuroscience facts above: that the brain cannot be an explanation for the human mind and human memory, and that these things must be aspects of some reality that is not neural, probably a spiritual dimension of humanity.

Since he believes that our minds are merely the products of our brains, Kurzweil thinks that we will be able to make machines as intelligent as we are, and eventually far more intelligent than we are, by somehow leveraging some "mind from matter" principle used by the human brain. But no one has any credible account of what such a principle could be, and certainly Kurzweil does not (although he tried to create an impression of knowledge about this topic with his book How to Create a Mind).   We already know the details of the structure and physiology of the brain, and what is going on in the brain in terms of matter and energy movement.  Such details do nothing to clarify any "mind from matter" principle that might explain how a brain could generate a mind, or be leveraged to make super-intelligent machines. 

In general, transhumanists tend to be poor scholars of four very important topics:

(1) Transhumanists tend to be poor scholars of biological complexity and the vast amount of organization and fine-tuned functionality in human bodies. So they make naive claims such as that injected microscopic robots will soon be able to fix your cells and double your lifespan, failing to realize the unlikelihood of that, given the vast complexity of interdependent components in the human body and the vast complexity of human cells and the biochemistry of human bodies. 
(2) Transhumanists tend to be poor scholars of genetics and human DNA. A proper study of the topic will help you realize that genes do not explain either the origin of the human body or the characteristics of the human mind. DNA merely has low-level chemical information, not high-level anatomical information, and not any information that can explain mind or memory. We have an example of very bad transhumanist misunderstanding of DNA on page 2 of Kurzweil's book "How to Create a Mind," where he incorrectly states, "A billion years later a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these 'programs.'" DNA has no specification of how to make an organism or any of its organs or any of its cells. There are no programs in DNA, but merely lists of chemical ingredients, such as which amino acids make up a particular protein. The idea that an organism is built by DNA is childish nonsense. DNA has no blueprint for constructing an organism, and even if it did, it would not explain how organisms get built, because blueprints don't build things. Things get built with the help of blueprints only when intelligent agents read blueprints to get ideas about how to build things. 
(3) Transhumanists tend to be poor scholars of human minds,  and the vast diversity of human experiences, normal and paranormal. A proper study of two hundred years of evidence for paranormal phenomena leads to the conclusion that humans are souls, not products of brains. 
(4) Transhumanists tend to be poor scholars of human brains and their physical shortfalls, which rule out brains as a credible explanation for human minds. Ask a transhumanist about things such as the average lifetime of brain proteins or the transmission reliability rate of chemical synapses, and you won't be likely to get the right answer.  


"I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more." 

Not true at all. Countless writers predicted that we would have machines as smart as people by the year 2000, and in 1999 futurists were typically predicting that this would happen by about this year (2025). Page 12 of the paper here gives a graph showing that 8 experts predicted that computers would have human-level intelligence by about the year 2000, that 9 other experts predicted that computers would have human-level intelligence by the year 2025, and that 12 other experts predicted that computers would have human-level intelligence by the year 2030. In the same interview, Kurzweil makes the gigantically false claim that our understanding of biology is progressing exponentially. To the contrary, biologists are stuck from an explanatory standpoint, still light-years away from credibly explaining how the most basic aspects of biology occur: consciousness, cell reproduction, morphogenesis, human self-hood, human understanding, instant human recall, the creation and formation of memories by humans, and the progression from a speck-sized zygote to a full human body. Biologists cannot credibly explain the origin of life or the origin of any type of protein molecule or any type of cell in the human body, such things being vastly too complex and vastly too organized to be credibly explained by Darwinian ideas. From an explanatory standpoint, neuroscientists are stuck in the mud, and getting nowhere, as they have been for decades. You might think otherwise from all the press releases about low-quality experimental studies guilty of defects such as way-too-small study group sizes. 

On page 4 of his 2013 book "How to Create a Mind" Kurzweil stated, most incorrectly, "We are rapidly reverse-engineering the information processes that underlie biology including that of our brains." Nothing of the sort was being done then, and nothing of the sort is being done now. Biologists lack any understanding of how a human can think or learn or recall by any brain mechanism, and when they try to sound as if they have some knowledge of such things, we usually get nothing but vacuous hand-waving. There are some very successful types of computer programs with biological-sounding misnomer names such as "neural nets," but such programs do not actually have characteristics matching those of the brain or any part of the brain. 

The main reason you will not ever be able to upload your mind into a computer is that such an idea is based on the assumption that your brain creates your mind and that your brain stores your memories: an assumption that is not correct. You are not your brain, but a soul. But you need not worry a bit about the impossibility of uploading minds into computers. The very reality that makes such a thing impossible is a reality that makes such a thing unnecessary. You do not need to upload your mind into a computer, because your mind is a soul reality that will not die when your brain and body die. One of the main types of evidence for this is out-of-body experiences, which have quite a few features beyond any credible explanation by neuroscientists (as discussed here).