Why True AI Will Never Happen
While it may be "artificial," it is not, nor will it ever be, "intelligence." These algorithms, while powerful, expose for us the hubris of the technologist: that we can breathe life into machines.
Much of our lives is run by technology, and much of technology is run by sophisticated mathematical algorithms. As they have become more powerful, these machines seem to act more alive. This is a lie. A sophisticated lie, granted, but still a falsehood. They are simulacra. They seem to be able to write and converse with us. They seem to be able to produce art. But all of it is a complex shadow-puppet game. There is intelligence involved, but it is not in the machines or the algorithms. The intelligence belongs to the technicians and programmers, the creative, intelligent human beings who invented and built these marvels of technical wizardry. The sooner we divest ourselves of the mythology that has come to surround these amazing machines, and they are amazing, the better able we will be to deal with their place and role in our society.
What is Intelligence?
The only really honest answer to that question is that no one really knows. We see evidence of it all the time in the people we meet. This is where the confusion arises for most. Our perception of intelligence is largely based on what we see and experience in others. But once you look under the hood and ask how it all works, things get much murkier. Type something into the ChatGPT dialogue box, and within seconds you will start receiving fairly competent and, at times, sophisticated answers. If you look only at surface appearances, it seems to act a lot like an “intelligent” entity. But is it? A clever person of reasonable intelligence can quickly start exposing the flaws and limitations of a program like ChatGPT. Still, for many it raises a lot of frightening questions. At the top of the list: will it eventually be able to think on its own?
So what is intelligence? Is it the mere ability to make calculations? At this point computers and algorithms can do complex calculations faster and more accurately than any human can. But is that the only measure of intelligence? How does one decide which calculations to make? How do we come up with the underlying mathematics that makes up the algorithms? Is the machine aware that it is doing calculations? It is one thing to teach a sentient animal to respond to various symbols and stimuli; it is another thing for the animal to reflect on the fact that it is responding to the symbols. With computer algorithms, the programmers are producing a set of very sophisticated responses to various input data. The claim that the algorithms are “learning” as ever larger amounts of data are fed into them is also untrue. All they are doing is executing pre-programmed instructions that tune numerical parameters with greater and greater precision. The machine itself has not learned anything. It is not aware that it knows anything. It is not conscious and cannot think about the data that is fed into it.
When humans learn, especially at a young age, the physical structures of our brains build themselves around what we are learning. On its own, your body adapts itself to the learning you do. The more proficient you are at a task, the less energy it takes to do it, and the less conscious you are of doing it. Your brain gets faster and better adapted to a task the more you do it. This is where the term “muscle memory” comes from. Computer hardware and software do not, and cannot, adapt in this way. The physical structure of the chips does not change. The program's instructions are not re-written by the data; training only adjusts numerical values within a framework the engineers fixed in advance. From an end-user perspective, the greater precision of algorithmic output from larger datasets may look like learning, but it is not. If there is learning and adaptation taking place, it is happening in the minds and bodies of the engineers.
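The distinction drawn above can be made concrete with a toy sketch. The program below is not any real AI system; the data and the single-parameter model are invented purely for illustration. Note that during "training" no instruction is ever rewritten: only the number `w` is nudged, and the rule for nudging it was written in advance by a human.

```python
# Toy sketch: "training" changes a number, never the program itself.
# Data and model are made up for illustration (targets follow y = 2x).

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented (x, y) pairs

w = 0.0     # the single adjustable parameter
lr = 0.05   # learning rate, fixed by the programmer

for _ in range(200):                 # fixed loop, written in advance
    for x, y in data:
        pred = w * x                 # fixed rule: multiply
        grad = 2 * (pred - y) * x    # fixed rule: gradient of squared error
        w -= lr * grad               # fixed rule: gradient-descent step

print(round(w, 3))  # w has drifted toward 2.0; no code was modified
```

Whether one calls this "learning" is exactly the question the essay raises: the numeric state changed, but every rule governing that change was authored by a person.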
Self-reflection is not the only problem with the idea of machine “intelligence.” The biggest challenge is that we really don’t know where intelligence comes from or how to measure it. We know it when we see it. How does skill with one’s hands, crafting objects in material reality, differ from dealing with concepts and abstractions? Is a woodworker any less intelligent than the Plato scholar or the mathematician? Where do words come from? How is it that we attach various associations to words and other stimuli, associations that vary from person to person, and even from one time to another within the same person? Why does a song make me sad while the same song fills you with joy? Much of our cognition happens out of sight, in the pre-conscious and subconscious mind. How do you account, in an algorithm, for the fact that your mind sifts and filters all the stimuli you receive before you are even aware of having received them, making decisions about what you are “allowed” to see, hear, smell, taste and touch? How can you write an algorithm for a process which is entirely opaque to our consciousness? And this brings us to the main problem with these algorithmic simulations: legibility.
The Problem of Legibility
Everything which happens in a computer, even a sophisticated one, is completely, 100% legible. What does this mean? From the physics and chemistry of the computer chips to their design and manufacture, there is nothing about the chip which isn’t apparent. Nothing is hidden. Its structure is there to be read and understood. The same is true of the software. No matter how complex or sophisticated, it can always be read and understood. There is nothing hidden in the physical structure or the symbols; it is all there to be read by us. This is also true of the databases. No matter how large, they can always be read. There is no “hidden” data, programming, or architecture that operates outside of our ability to read it.
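A small sketch makes the point about legibility tangible. The three "weights" below are made-up numbers standing in for a stored model; the point is only that every byte of machine state can be serialized, printed, and recovered without loss. Nothing in the machine is hidden from inspection.

```python
# Toy illustration of legibility: every byte of a stored "model" can be
# read back and printed. The weights are invented for illustration.
import struct

weights = [0.25, -1.5, 3.0]                  # a tiny stand-in "model"
blob = struct.pack("3d", *weights)           # serialize to raw bytes

print(blob.hex())                            # every bit is visible as text
recovered = list(struct.unpack("3d", blob))  # and fully recoverable
print(recovered)
```

However large a real system's parameters are, they are in principle just a longer version of this hex string: readable, copyable, and exhaustively inspectable.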
Human consciousness is very different. While there are certain legible and measurable structures that can be observed, much of what makes up our thinking and consciousness occurs out of sight. It is illegible. Who can read the content of our dreams? How do words come to us? These words are forming as I type them. Am I aware of them as I form them? Often I don’t know what I think until after I have spoken or written it. The act of writing is the act of thinking. The words that emerge and are actually written are often very different from the words I intended to write. Then, once written, they take on a kind of fixity that is hard to change. I will engage in an argument over words that were, mere seconds ago, nowhere, and that were not the words I had intended to write. Where do they come from? Why is one person better at producing them than another? We don’t really know. We know that intelligence happens. We can see evidence of it, but we don’t understand how it arises. If this is the case, how are we supposed to make machines “intelligent” when we have no idea why we are intelligent or how this intelligence comes into existence?
Human Nature and Personhood
In part, this debate is hamstrung by two philosophies, each of which has been deeply destructive in its own way. One is philosophical materialism, which argues that matter is all there is. The argument is essentially that the entire universe is, in the end, legible. Everything that happens in the universe can be observed, measured and understood. Everything is the result of observable material processes. Because of this, everything that happens is determined by an unbroken chain of cause and effect. Your thoughts, your consciousness, are mere artifacts of a bio-chemical process, fully determined by the physics and chemistry of your brain. The problem with this theory is that it breaks down when confronted with the mystery of human consciousness and will. It simply cannot explain how our mind, our consciousness, comes to be. How is it that I can be aware of my own physics and chemistry and make choices based on that awareness? This should not be possible if human beings were merely fully legible bio-machines. You can see how, if one is a materialist and believes that human beings are fully legible, material beings and nothing more, it should follow that human consciousness can be replicated in a fully legible machine. But human consciousness is not legible, and its relationship to the physical is not really understood.
The second problematic philosophy is nominalism. Philosophical realists argued that metaphysical concepts are inherently part of the structure of the world, and that thinking is a process of recognizing ideas that are already a real part of that structure. Nominalism, in its battle with the realists, argues instead that concepts are merely products of human consciousness. Any connections we make between things, any generalizing we do, are ideas that originate within human consciousness and are applied to the world by us. Meaning and connection are not inherent in the world itself; meaning is something we as human beings give to the world. This idea was instrumental in the genesis of the scientific method, but it was hugely destructive to our understanding of ourselves as human beings.
Prior to the introduction of these two ideas, it was fairly commonplace to understand that we as human beings have an innate human nature. There is a certain something that binds us all together as human beings that cannot be fully understood materially and that exists prior to our apprehension and understanding of it. As an aside, this notion of “human nature” is vital for the outworking of Christian salvation through the full human nature of Jesus. The saying of St. Athanasius, “what is not assumed is not redeemed” is based on this idea of a real human nature.
Modernity hates the idea, largely because it finds the notion too morally restrictive. Moderns dislike the idea that homosexual behavior is a violation of our human nature. They dislike the idea that transgender surgery and drug therapy are violations of human nature. If there is no human nature, then human beings are infinitely malleable, defined only by our apprehension and understanding of what a human being is.
So why does this matter? Once we embrace the idea of a human nature, it is much easier to understand that all the legible parts of ourselves are wrapped up in the characteristics that we share with each other as human beings. Things that we use to distinguish us from one another “scientifically” should properly be understood as things that bind us together. Instead of trying to classify and separate ourselves based on differing personality traits, we can recognize that personality is something that binds us together. Once we understand that we have a legible part of ourselves rooted in our shared human nature, we are ready to recognize that each of us is also a unique human person that is utterly illegible. There is something about each of us that makes us uniquely us, different from any other person. No words can communicate this essence, as the use of language is a shared human trait. Our essence is beyond knowing. If this sounds a lot like the theology of God’s essence and energies, that is because it is. I would encourage you to read Vladimir Lossky’s Orthodox Theology and The Mystical Theology of the Eastern Church to understand this more fully. Because we are made in the image of God, who we are as persons is ultimately unknowable. We can meet one another and experience each other’s energy, but in the end we are, in our essence, not legible.
And yet we can be known. But what is it that we know? It is the terror of the question, “Why do you love me?” We make the connection. We meet each other and peer deeply into the other’s soul, and we know; but what we know is beyond the capacity of words to express. As an aside, this is why you cannot build community over social media or a Zoom meeting. For our purposes, it is also why the algorithms will never be intelligent. They are by nature fully legible, and true intelligence would require rendering legible the human person, that portion of ourselves that is unique to us and cannot be rendered legible in any form, including language.
Language and Meaning
We have to understand that words, whether spoken or written, are artifacts of intelligence, but they carry no meaning in and of themselves. When we produce language, we have within us meaning that we intend for those words to carry. But once the words leave us, they, and their meaning, are beyond our control. When the hearer hears or reads those words, they attach meaning to them. Meaning and the words themselves are separate. Meaning, in many ways, exists prior to the words. The algorithmic production of words may appear to generate meaning along with the words, but this is an illusion. The algorithm is constructed as it is because programmers, applying rules of grammar and the meanings they themselves intend, dictate which words should be put together with which others.
The algorithm is at all times completely legible. It is merely a set of symbols being manipulated in ways stipulated by the programming and the constraints of the hardware and software, all of which are legible artifacts. Whatever meaning there is in the words is given to them by the programmers or by the persons who read them. The machine may spit out the word “dog” in conjunction with other words based on the algorithm, but that same word will evoke terror in one person, who was bitten by a dog as a child, while for another it will be a gentle reminder of a loving association of sights, sounds and even smells. The machine and the algorithm are not consciously, pre-consciously or subconsciously aware of any meanings associated with the symbols. They just spit the word out as the product of a complex mathematical formula. What meaning is contained in this word? What meaning is intended? What meaning is generated upon reception? All of them? Some of them? If my experience of “dog” is uniquely my own, part of my unique, illegible personhood, how do you render it legible for the algorithm?
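The claim that the machine "just spits the word out" can be illustrated with a deliberately tiny bigram generator. This is not how large language models work internally, only a sketch of the principle: the word "dog" is emitted purely because of counted word-pair frequencies in an invented corpus, with no awareness anywhere of what a dog is.

```python
# Toy bigram generator: emits words by counted pair frequencies alone.
# The "corpus" is invented for illustration.
import random
from collections import defaultdict

corpus = "the dog barked . the dog slept . the cat slept .".split()

# Record which word follows which: pure symbol bookkeeping, no semantics.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, output = "the", ["the"]
for _ in range(3):
    word = random.choice(follows[word])   # a formula picks the next symbol
    output.append(word)

print(" ".join(output))
```

Every step here is fully legible: the counts, the random draw, the emitted string. Whatever "dog" comes to mean happens entirely in the reader.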
Can the machine reflect consciously on its preconscious and unconscious associations with words? Of course not. The machine is a mere simulation of intelligent personhood. One might ask: if the simulation becomes sophisticated enough, does it at some point become the thing it intends to simulate? This is the argument now being employed in the transgender movement. If I am born in a male body but can present myself as a female in such a way that the two are indistinguishable, does this not then render me a woman? No, it does not. In the same way, a sophisticated simulation of consciousness does not thereby become conscious and genuinely self-reflective.
The point raised at the beginning of this section raises the question of how meaning can be communicated at all. It is challenging. Why do you think misunderstandings are so common? In the end, understanding happens in a kind of spiritual coming together. This is easiest in community, where people live constantly in close contact with each other. We see it in married couples or deep friendships, where communication can happen wordlessly or where one finishes the sentences of the other. In this sense you might argue that there is a kind of shared consciousness or intelligence.
It is here, on this point, that many hold out hope for true intelligence to emerge within machines. The idea is that you would keep people constantly connected, feeding data all the time into algorithms that grow ever more sophisticated as programmers understand the data better, until this kind of collective human consciousness simply emerges as its own distinct conscious entity, with the machine as its medium. It is the kind of wishful thinking that keeps people believing that human consciousness spontaneously emerged through the random collision of atoms and particles. Perhaps they saw it on TV, and because it was on TV, it must be real. I suspect that this kind of algorithmic social media will merely leave people feeling less connected and lonelier, rather than spontaneously producing some giant super-consciousness.
The key thing to remember about all these algorithms is that, no matter how sophisticated the programming or how powerful the hardware, they are merely manipulating and moving around legible symbols in ways predetermined and limited by their programming and hardware. What they lack is the very thing that makes human beings, well, human. They will always lack this. Thus, they will never produce actual intelligence.
I found this article quite illuminating. As a professional programmer, I have been reading about artificial intelligence for some time, trying to understand why so many of my fellow programmers believe that AI systems are becoming self-conscious and intelligent in the same way as human beings. Recently there was a presentation by experts titled "The A.I. Dilemma" (https://www.youtube.com/watch?v=xoVJKj8lcNQ) in which they argued that large language models such as ChatGPT were directing their attention to new problems independently of their human programmers. The presenters claim that these new capabilities have appeared spontaneously and that we do not understand "... how they show up, when they show up, or why they show up."
Being dubious about such claims, I looked at a philosophical treatment of AI by the philosopher Dennis Bonnette, who argued that artificial intelligence is an oxymoron, i.e. if a being is intelligent, it can't be artificial. I wrote an article on my Substack titled "The God in the Machine" in which I demonstrate that AI can't be intelligent because 1) no combination of non-living entities can have either perception or thought, since these noetic activities are beyond the capability of mechanical components, whether alone or in combination with each other; and 2) computers are incapable of understanding ideas, because ideas are not physical objects but exist in an intelligible realm where things such as numbers and virtues reside. Only creatures able to sense spiritual realities can discern such ideas. Because of their nature as physical objects, machines will never have access to this realm.
Since many in my profession don't have the philosophical background needed to detect the emptiness of the claims made for artificial intelligence, I tried to make clear why so much of the current fear-mongering about AI may have motivations beyond the apparent attempt to warn computer professionals about an existential threat. Since your arguments reinforce the same concerns I have about the empty claims of AI, I thought you might be interested in taking a look at arguments that echo your own but with a different emphasis. My article can be found here: The God in the Machine: https://christianpresence.substack.com/p/the-god-in-the-machine
Any feedback would be highly appreciated.