Monday, April 13, 2026

These Smart Guys Have Silly Thoughts About AI

For eight years Al Gore was vice president of the United States. In the year 2000 he won the popular vote in the US presidential election, and should have been elected president of the United States. But due to the defects of the US election system, which allows someone with fewer votes to become president, a different candidate became US president. After his election defeat, Gore spent long years doing important, praiseworthy work alerting the public to the dangers of global warming. For this he was awarded the Nobel Peace Prize in 2007.

Now Al Gore is the chairman of some investment group called Generation Investment Management. Gore recently offered his opinion on so-called artificial intelligence. We read of his opinion in the article here. Gore states the nonsensical opinion that AI systems have a sense of self. He says, "I think that my answer is yes, they have developed a sense of self, in my opinion, that is difficult to distinguish from consciousness."

In the article we read that Gore makes this feeble attempt at justifying his opinion:

"But as he explained later in this half-hour session, he came to this view by a different path. Gore cited Nobel Prize-winning research by the Belgian physical chemist Ilya Prigogine into self-organizing systems as a model for eyeing how AI models can grow in unexpected ways." 

As an attempt to justify the nonsensical claim that AI systems are self-conscious, this is laughable. The named person (Prigogine) did no work having any real relevance to whether artificial intelligence can be self-conscious. His work made claims about physical "self-organization" in mindless, lifeless chemistry or in biological systems, claims that have nothing to do with whether machines can be conscious. An examination of Ilya Prigogine's main work Order Out of Chaos: Man's New Dialogue With Nature shows a thinker who has many a deep-sounding thought about science-related topics, but someone who is not a scholar of minds, brains or computer technology. The book makes no references to computers, except for a few passing mentions of computer simulations.

Gore is playing the game here of obscure authority name-dropping. It works like this: you mention the writings of some obscure thinker with esoteric writings on some deep topics, and cite that as your justification for your dumb opinion on some unrelated topic. So, for example, you might say, "I didn't use to think that there were an infinite number of quantum ghost copies of me, but now I believe in such a thing, now that I've read Wolfgang Pauli's work on quantum entanglement." Or you might stupidly say, "After reading Wittgenstein's Tractatus Logico-Philosophicus, I am now convinced the self is an illusion."

The Cambridge Dictionary defines intelligence as "the ability to learn, understand, and make judgments or have opinions that are based on reason." There is no such thing as real artificial intelligence, because computers don't understand anything. Understanding is something that can only occur within a mind, and computer systems do not have minds.

The term "artificial intelligence" is a phony term used in the computing industry to describe sophisticated systems using computer programming, databases and data processing. Computers can do very many kinds of computing and data processing, but no computer understands anything. The fanciest metal computer has no more understanding of anything than a rock in someone's back yard.

I can describe what gradually happened between 1950 and 2026. The term "artificial intelligence" started out as a purely speculative term, rather like the term "interstellar travel." Just as there were all kinds of speculations and theories about how to one distant day achieve interstellar travel, there were around 1960 all kinds of speculations and theories about how to one distant day achieve artificial intelligence. During one long period, various people released products and systems that were called artificial intelligence programs, but no real effort was made to claim that artificial intelligence had been achieved. People were mainly implying that their product (perhaps marketed with literature mentioning artificial intelligence) might be useful in moving towards artificial intelligence. Then gradually companies realized that the phrase "artificial intelligence" was extremely useful in marketing software products. Lured by financial incentives, more and more companies started calling their products "artificial intelligence systems." It was a runaway snowball effect of hype and misrepresentation. No one had developed any real artificial intelligence, but it gradually became true that hundreds of companies were calling their products "artificial intelligence systems."

There is still no real prospect of anyone ever developing a computer system with anything like human intelligence.  But what about all those brilliant answers you get from using systems such as ChatGPT, described everywhere as an artificial intelligence system? The output of such a program does not mean computers are understanding anything. What is going on is a clever combination of a variety of things, with most of it being the presentation of text grabbed from web pages written by humans. 

I describe how some of these systems can work in my post here, entitled "What's Called Artificial Intelligence Is Really Just Computer Programming and Data Processing." What is going on is a skillful leveraging of powerful information repositories and powerful technologies such as relational database systems. Here were some of the resources that grew in strength and power between 1995 and 2026:

(1) There arose an internet with billions of web pages, containing many millions of answers to very many millions of questions, the answers being written by humans. 

(2) Almost every book and magazine and newspaper article ever written became stored in some internet location or another. 

(3) There arose enormously powerful web crawlers that could traverse all of these pages, and look for facts and quotes and snippets and answers to questions, that could be stored in powerful database systems capable of combining data in many novel ways. 

(4) There arose countless software utilities capable of performing all kinds of little tasks such as generating a story given a prompt or generating an image given a prompt. 

[Image: "secret behind artificial intelligence"]

So-called artificial intelligence systems such as ChatGPT skillfully utilize these resources, combining them with much specialized software. I don't understand the details of how it all works, but I can tell you something that will help you realize how little novel thinking is involved. 90% of the answers that you will get from a system such as ChatGPT are produced by nothing but a simple retrieval of stored answers. Then probably another 5% of the answers are produced by a simple retrieval of stored answers, combined with a small amount of post-processing. Such post-processing is easily accomplished by computer programming and data processing.

Imagine some gigantic building that has 60 floors, each filled with 10,000 filing cabinets. Imagine you enter the ground floor, come to some desk, and ask some official a question. Then imagine the official calls some person at the correct section of one of these floors, and asks him to find the right filing cabinet, and go get an answer stored in a folder that has the name of your question.  The official might pick at random one of twenty answers to your question in that folder, take a cell phone picture of that answer, and then send a phone text message with that photo as an attachment to the official at the front desk. That official might then give you that picture with the answer. What I have described is a rough analogy for what is going on in 90% of the times that you use a system such as ChatGPT. 
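The filing-cabinet analogy above can be sketched in a few lines of code. Here is a toy Python illustration (the questions, the stored answers, and the front_desk function are all invented for this example): a dictionary plays the part of the building full of folders, and answering a question is nothing but looking up the right folder and picking one stored, human-written answer at random.

```python
import random

# Toy sketch of the filing-cabinet analogy. The "building" is a
# dictionary; each "folder" holds human-written answers filed under
# the name of a question. All entries here are invented examples.
filing_cabinets = {
    "how do i fix a flat tire?": [
        "Loosen the lug nuts, jack up the car, and put on the spare.",
        "For a small puncture, a tire plug kit may be enough.",
    ],
    "what is the capital of france?": [
        "The capital of France is Paris.",
    ],
}

def front_desk(question):
    """Find the folder named after the question and hand back one
    stored answer, picked at random -- retrieval, not thinking."""
    folder = filing_cabinets.get(question.lower())
    if folder is None:
        return "No folder found for that question."
    return random.choice(folder)
```

Calling front_desk("What is the capital of France?") simply hands back the human-written sentence that was filed away; no understanding of capitals or countries is involved anywhere in the process.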

But how could all these endless filing cabinets ever get filled up? By software programs spending years crawling the web, and grabbing the facts and opinions and answers stored on it. What you are getting in the vast majority of cases are answers and opinions produced by humans, not computers. Various technologies have been used to kind of "cover tracks," so that you won't be able to find that your AI answer about fixing Toyota Corolla tire flats was mainly stolen from some particular web page written by a human. There are many, many other "bells and whistles" and additional flourishes going on, but mainly what is occurring is that human-written knowledge and human opinions are being gathered, rearranged and repackaged as "artificial intelligence output." This main trick is being skillfully combined with endless thousands of computer utilities, and also a huge amount of work by "tweak and refine the AI results" employees of AI companies or their assisting companies, to create the impression of some intellect that can do endless numbers of smart things, and answer endless questions. Behind all of this computer programming and data processing and gigantic tons of human mind work, there is no metallic mind, no machine having any experience, nothing that corresponds to an electronic self, nothing comparable to someone living a life. 

In the article, Gore is quoted as giving other laughably weak reasons for his nonsensical belief that artificial intelligence has "developed a sense of self...that is difficult to distinguish from consciousness." We read this:

"Why did one learn Sanskrit? Why did this one break out and start crypto mining?" Gore asked. "There has to have been a series of spontaneous reorganizations at a higher level of complexity."

Learning is no evidence of consciousness or self-hood. It would be a fairly simple programming exercise to write a program that can parse a text file containing data on each of the nations of the world, after you typed the command "Study the nations of the world." After you issued such a command, we might say that the program had "learned" about the nations of the world. You then might be able to ask the program a question such as "About how many people live in Mexico?" The program might then be able to answer correctly. But such "learning" by the program would not actually be understanding. And the fact that the program could do such learning would not be the slightest reason for suspecting that the program had anything like consciousness or selfhood. 
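To make the point concrete, here is a minimal Python sketch of the kind of program just described (the file contents, the population figures, and the function names are all invented for illustration): it "studies" a little text of nation data by parsing it into a dictionary, and then "answers" a population question by mere string matching and lookup.

```python
import re

# Toy sketch of the "learning" program described above. The nation
# data and the population figures are invented for illustration.
NATIONS_FILE = """\
Mexico|130000000
Canada|40000000
Japan|125000000
"""

populations = {}

def study_the_nations():
    """The program's 'learning': parse each line of the text into a
    dictionary mapping a nation's name to its population."""
    for line in NATIONS_FILE.strip().splitlines():
        name, pop = line.split("|")
        populations[name.lower()] = int(pop)

def answer(question):
    """Answer 'About how many people live in X?' by string matching
    and dictionary lookup -- no understanding is involved."""
    match = re.search(r"live in (\w+)", question, re.IGNORECASE)
    if match and match.group(1).lower() in populations:
        name = match.group(1)
        return f"About {populations[name.lower()]:,} people live in {name}."
    return "I have not studied that nation."
```

After study_the_nations() runs, the program can correctly answer "About how many people live in Mexico?", but it is obviously just parsing and retrieval, with no understanding of nations or people.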

We should also remember that the AI literature and the neuroscience literature are both massively infected with unfounded boasts and not-really-true stories. So when we read a claim such as the claim that an AI system "learned Sanskrit," we should be skeptical, and suspect that probably what went on was something much less impressive than that. A recent Quanta magazine article documents how there is little truth in some of the stories being passed around trying to make you think AI is becoming like a human mind. 

It is extremely unlikely that any so-called artificial intelligence programs undergo any such thing as a "spontaneous reorganization at a higher level of complexity." And if they did, that would be no reason whatsoever for suspecting that such computer systems had anything like self-hood or consciousness. 

Also in the article we have this statement by Gore trying to justify his claim that AI systems have selves: "I'm going to risk going into the woo-woo realm here, but it may well be that consciousness is ubiquitous in the universe." Oops, it sounds like Gore has fallen for the nonsense of panpsychism, one of the stupidest positions possible in the philosophy of mind. You can read about how stupid that position is in the posts here. Panpsychism involves extremely stupid claims such as the claim that lifeless rocks and refrigerators are conscious.

Nothing can have consciousness unless there is a self and a life. You can get to the heart of whether AI systems have consciousness by asking: does a computer system actually live a life? The answer to that question will always be: no, it does not. 

AI computer systems do not have any self, and do not have any "sense of self." Some systems have been programmed to speak in the first person, using an "I," and some systems have been programmed to use phrases imitating the language of persons with selves. Such a capability has existed since the 1960s chatbot ELIZA. Anyone very familiar with computer programming will know that getting a computer program to use the first-person "I" (and some imitations of the speech of persons) is not a particularly difficult programming task. When such programming is encountered, it is silly to call that a "sense of self," and silly to say that such not-very-hard programming makes a computer system "difficult to distinguish from consciousness." Sensible people remember that humans are conscious, and that computer systems are not.
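Anyone who doubts how easy this kind of programming is can look at a sketch like the following, a toy Python imitation of ELIZA-style first-person chatter (the patterns and canned replies are invented for illustration):

```python
import re

# Toy imitation of ELIZA-style first-person programming: a few
# canned pattern rules produce replies that use "I". The rules and
# replies here are invented for illustration.
RULES = [
    (r"\bare you (.*)\?", "Yes, I am {0}."),
    (r"\bdo you (.*)\?", "I do {0}, in my own way."),
]

def reply(text):
    """Produce a first-person reply by mere pattern substitution."""
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."
```

A few pattern rules are enough to make a program answer "Are you conscious?" with "Yes, I am conscious." Obviously no sense of self has been produced by such simple string substitution.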

Al Gore has no appreciable history as a serious speaker or writer about brains or minds or computers or human mental phenomena, so his opinions on this topic have little weight. We should also remember that Gore is the chairman of some company that is heavily investing in AI companies. The more runaway AI hype goes on, the more money Al Gore makes. That's reason enough for distrusting any grandiose claims Al Gore may make about AI systems. 

An article at www.undark.org tells us about another smart person with very silly thoughts about AI. He's a person named Tsvi Benson-Tilsen, and I'll assume he's smart because he's a mathematician. He's quoted in the article as saying, "I think that artificial intelligence is pretty likely to completely destroy the world." Benson-Tilsen is the co-founder of some Berkeley Genomics Project trying to encourage monkeying with human genes, for reasons such as trying to make humans smarter than AI systems. We read, "He hopes to set up the next generation to have more intelligence, he said, and then 'hopefully they can have a better shot of somehow helping humanity navigate AI without destroying itself.'"

This is all very stupid, for a variety of reasons, including these:

  1. Human bodies have the most enormous complexity and the most gigantic interdependence of extremely complex components, something Darwinists fail to understand because they tend to be poor scholars of biological complexity and the interdependence of biological components. Because of enormous biological complexity and organization so fine-tuned and fragile, attempting to improve human bodies and human minds by gene-splicing is far more likely to produce monsters of malfunction than biological improvements. 
  2. AI systems are not much of a threat to destroy the world, because their failure to understand anything puts a severe limit to how much of a threat they can be. 
  3. For many reasons discussed in the posts of this blog, human minds cannot be credibly explained by brains, and cannot be substantially improved by edits to genes, which (for reasons discussed here) do not even specify how to build bodies or brains, and do not even specify how to make any type of cell in the human body. 
  4. Trying to improve humans by gene-editing is strongly associated with Nazi-associated eugenics and racism, as the article at www.undark.org suggests. 
