AI Is Not
An argument in two premises (with footnotes!)
He must be very ignorant, for he answers every question he is asked
—Voltaire
Lately, there is a lot of talk about something people call “artificial intelligence”, as if this were something. A few people are even convinced that if they chuck enough “data” into a machine of some sort, at some point some kind of “superintelligence” will emerge. To explain this, they use what can only be called “Underwear Gnomes” Logic.1 More modest people speak of achieving “General Intelligence”, whereby machines will mimic the “output” of human intelligence, and debate how close certain machines are to producing such outputs. Although somewhat more reasonable sounding, calling all this ‘intelligence’ is a bit like debating which player piano will become the next Vladimir Horowitz or Thelonious Monk.
Sometimes philosophers talk of a category mistake, where something that applies to one type of being or situation is inappropriately ascribed to another sort of being or situation. These are, in literature, the fodder of comedy.2 And perhaps what we need now is comedy more than argument, but alas, ‘tis not my specialty.
In language analysis it is useful to talk of dead metaphors--things that were originally conceived of as metaphors but somehow became taken literally. “The fields of science and technology are full of these things”, the author types, before reaching for his mouse to move to a different screen. Sometimes it is remembered that they are metaphors, but often it is not. This led me to wonder if something akin to this has happened here--where the metaphorical juxtaposition of the contradictories “artificial” and “intelligence” has oxymoronically been misunderstood as literal by oxymorons without the “oxy”.
After a moment of reflection, I decided it might be worth demonstrating, in a brief argument, why this term is only a metaphor, and an oxymoronic one at that. In brief, AI is NOT. If you are utterly convinced by this, or find the whole thing utterly unconvincing, you can skip the notes on the premises. But if you are a living intelligence--the only kind!--read on.
The Argument
1. All intelligence is a living activity (a)
2. No mechanical activity (including AI) is a living activity(b)
----------------------------------------------------
C: No mechanical activity (including AI) is intelligent
Notes:
(a) I imagine some might find this a bit question-begging. But I have to say this to those who oppose this claim: The burden of proof is on you!! That there is living intelligence is undeniable. After all, people are living, and some at least (okay, in principle, all) possess intelligence. Even animals have some sort of rudimentary intellect, since perception is a discernment of some sort, and discernment--the ability to distinguish between types of things--seems to be a crucial element in intelligence. I am aware that for us and for our animal brethren this can occasionally go wrong, but it seems to me that the capacity to recognize faulty perception or judgment is just a sign of successful perception or judgment.
Okay, so let us grant that there exists living intelligence. But I did say or imply that only living things possess intelligence. This is a somewhat stronger claim, not merely an empirical claim but an assertion of necessity. And I probably need this stronger claim to make the argument work. So I wanted to give some sense as to how it is that only a living thing would have intelligence, or perhaps more broadly, why living things have intelligence of some sort.
A more thorough discussion of what I adumbrate here is found in an essay by Hans Jonas.3 Intelligence involves some capacity to discern between different sorts of things. These things can be external things—in one’s world or environment. They can be internal things—one’s emotional or mental states. This capacity cannot simply be these distinguished things, for reasons made clear by Aristotle: were intelligence simply a ‘thing’, it could not distinguish things, for it would just ‘be’ a thing. Another way of putting this—intelligence involves some sort of “negation” or “difference”, since it must minimally divide things into ‘this thing’ and ‘not this thing’. And so it somehow must incorporate opposites within itself, something no thing can actually do. Although I can consider being alive and dead in opposition, alas, I cannot really be both.4
The question, then, is “do living things need to distinguish between things?” The answer is unequivocally “yes”. To take the simplest example: any animal needs to distinguish between ‘food’ and ‘not food’ in its environment.5 Thus animals have an interest in distinguishing things in their environment. Intelligence, on this account, is inseparable from interest or desire. First and most elementary are the desires to develop to maturity and for those activities necessary to such maturity; then to reproduce and care for future generations of one’s kind. These alone account for a wide variety of perceptive, imaginative and calculative capacities, all geared toward making relevant distinctions, the criterion of relevance being what is life-enhancing to the particular form of life in question—that which contributes to what Aristotle long ago called its eudaimonia, often unhappily translated as ‘happiness’ but better rendered as ‘flourishing’.
My account thus far is a bit incomplete. For it seems to make of intelligence a mere tool, subservient to antecedent desires, and so to leave out what we might call the higher order activities of intellect, ultimately grounded in a sort of curiosity, a desire to know or distinguish for its own sake. But the appearance of mere subservience is misleading, for the discernment that any species practices as a means to other ends becomes part of the very activity that defines its end. Intelligent discernment of prey is as much a predator activity as the pursuit, capture and consumption of the prey. But let us grant that there are intelligent activities not simply bound by animal necessities. Science, art, acts of individual or collective self-interpretation, all seem to share in discernment not so bound. None of this changes the basic dynamic I am pointing to; in fact, it doubles down on it. For the capacity to distinguish in these realms of the intellect answers to a wish or desire to do so. The tether between the capacity to know and the desire to know seems difficult to break, from the lowest to the highest levels. And, as far as I can see, it is only living things that have desire, including the desire to know taken playfully, or in the highest sense of the word, leisure.
As far as so-called “machine intelligence” goes, I suppose I owe some explanation as to why this isn’t ‘intelligence’, and can’t just tell you to read the end of part 5 of the Discourse on Method with particular care (although you should). After all, some might say, doesn’t the current fashion in AI produce speeches that can fool people into thinking that they are produced by a living intelligence? Isn’t producing speeches part of intelligence, at least of the human variety? To which I reply that you must then also think an old-fashioned teletype machine or dot-matrix printer intelligent. Yet surely no one thinks these machines intelligent.
I suppose one might say that these merely reproduce the speeches of a human agent. And this would be true. But this also seems to be the case with LLMs. Now, they don’t reproduce speeches ‘word for word’. There is some complicated math problem that predicts likelihoods of words going together, given a large data set of previously “borrowed” speeches. But it is clear to me that statistical prediction of speech patterns based on past speech patterns is not the same as discernment of things in speech. In certain sports, prediction algorithms draw upon past data to make predictions about the likelihood of success of future actions in the game. But are we sure that the machine which calculates such things “knows” what the statistics mean, in terms of what the activities of the game involve?6 I doubt it. Such statistical prediction, as wonderful as it is, is not discernment of what the things are that are predicted.
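The difference between pattern prediction and discernment can be made concrete with a toy sketch (my own illustration, and far cruder than any actual LLM): a bigram model that “produces speech” by counting which word followed which in a corpus and emitting the most frequent successor. Nothing in it refers to what any word means.

```python
from collections import Counter, defaultdict

# A toy "speech producer": count which word follows which in a tiny corpus,
# then emit whichever successor was most frequent. Nothing here refers to
# the things the words are about; it is pattern frequency all the way down.
corpus = "the cat sat on the mat and the cat ate the food".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus, or None."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # → cat  ("cat" follows "the" in 2 of 4 occurrences)
```

Modern LLMs are vastly more sophisticated than this, of course, but the point stands: sophistication of this statistical kind is still counting co-occurrence, not discerning what the counted things are.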
There is a body of literature trying to come to terms with what it is that the machine is doing when it ‘produces a speech’, or what to call said products. The common term for erroneous speech products is ‘hallucination’, but this doesn’t seem quite right, since one who hallucinates is attempting to model in her speech some sort of reality. The distinction implicit in the one who hallucinates between speech and reality is just as much a distinction as any other, and it’s unclear to me that LLMs distinguish at all. The human hallucinator is ultimately interested in discernment, whereas the machine has no such interest.
So I am somewhat more sympathetic to those who would call what these machines do ‘bullshitting’, following the definition made famous by Harry Frankfurt. A bullshitter is one who speaks with indifference to the difference between truth and falsity.7 But even such indifference acknowledges the distinction between true and false, even if deciding to ignore it. All of this involves distinguishing things, even negatively. And negation, as noted above, is the work of intelligence.
I don’t quite know how to talk about this ‘production of speeches’ other than to say that it is not speech-making at all, since a speech is something that includes a reference to things other than the speech itself, and it is precisely this lack of reference that defines ‘machine speeches’. The outputs of LLMs are, in fact, only significant, only speeches, insofar as they are read and understood by a speaking being—a human. Before this they are only pixels or outputs. Any other talk of this is merely metaphorical, since, speaking precisely and literally, speeches are neither of these.
There’s a lot of loose talk out there that hasn’t done much thinking about what it might mean to give an account or make a speech. The mere physical act that produces an ‘artifact’ is not it. It involves an intention to distinguish between things, and this is only what a thing that lives can do, for only living things have intentions, because only living things have a relationship between their internal desires and the world around them. No machine has desires—an indication of the need to supplement internal resources with worldly resources—and consequently, it has no ‘world’ of things that it wishes to distinguish, whether in speech or not. A machine just is, as all non-living things are. It has as much soul as a stone.
(b) In a way, I’ve already dealt with the connection between living things and machines, or rather, its absence, in the previous note. After all, if machines don’t have something akin to desire--and they don’t--then they don’t really live. But I will add a few comments here. Animals are born; machines are made. Even plants, the least sensitive of the living, have an odd beginning to an individual life, when viewed from a merely mechanical perspective. We can describe the mechanics of animate procreation, but there is something odd about considering this as reducible to, say, the mechanics of the production of the Seventy Three Thousandth microwave oven.8 Being born (or, if you will, the coming to be of a living individual) is a somewhat mysterious process, whereas being made is as mysterious as making toast.
And try as they might, modern scientists have not been able to manufacture life out of non-living antecedents, nor do they seem any closer to understanding its origin in the universe as a whole in this manner. The reduction of life to material mechanics has been somewhat of a scientific fantasy since, well, part 5 of Descartes’ Discourse on Method. But this fantasy is, ultimately, a grotesque--a Frankenstein project.
Now there are those who don’t take Mary Shelley’s book as a cautionary tale, but as some sort of instruction manual. I would argue that the ill-fated doctor had more reason in him than those currently inspired by him. He had the sense to use formerly living tissue, whereas today’s descendants of the good doctor content themselves with manufactured tissue. Now the good doctor made the error of supposing that the key to living was in the mere physical form of the tissue, thereby confusing cause with effect, but at least he was indirectly acknowledging the cause.9 For while the stitched-together body parts of the monster had at least at one time proven themselves capable of serving life, the various parts of our modern monsters have never had such a proof.
This error has generally been the way of our modern scientific consciousness, which finds beneath the living and breathing animal something in principle no different than a clock, as Descartes himself said. The attempt to reduce life to its material antecedents and laws of mechanics, then has a long history and has been the begged question in modern thinking about living nature. One might wonder why this has been the primary reduction and not the other way around—that the non-living is, in all its forms, oriented towards and anticipations of the living. This older way of thinking—that the stones themselves would cry out—is regarded as so much superstition, but is prima facie no less rational.10
I suspect that it is not reason that lies behind the wish to reduce life (or soul) to a series of material antecedents. Bacon and Descartes talk in their works of the desire to become ‘like masters and possessors of nature’. Descartes, at least, was clever enough to qualify this desire as a simile, to further qualify it by warning that his work might contain examples that ought not to be imitated, and to acknowledge the need for an infinite number of machines to conquer an infinite number of maladies. At best, Descartes thought the infinite project of progressively mastering nature a worthy collective task to divert humans from the religious and cultural narratives of meaning that ordinarily consume their passions.
His disciples have at times not been so careful, but I wonder if the more careful version carries within it the same consequence as the careless. One must consider what it would mean to have nature as a possession or as subject to human mastery, whether understood as an asymptotic progress or as a final apotheosis of victory. It would mean that nature has within herself no purposes or intents, but finds her intents given from without by human intention and desire. Living things are then quite an obstacle, for it is quite clear that living things possess intentions independently of our human use of them. A cow is not simply beef, although this is its value to the diner. We moderns are always at table. That we ourselves are also living, and on this logic have no native intents or purposes, we recklessly celebrate as a sort of liberty, not realizing that in so doing we invite ourselves, like all other purposeless animate machines, to be recklessly dined upon.
So in the end there is a close connection between the inability of machines to have intellect and have life—both require, as noted above, something like an interest in the world, which only something born (and perhaps subject to dying) could have.
The notion of nature as something given its purposes and intents from without is a faint and distorted echo of the Christian notion of God as creator of the world from outside the world. This reveals, perhaps, the last level of our attempt to reduce the living to the non-living--a certain God-envy. Were we to somehow reduce the living to the non-living, and have mastery over the non-living, we could in some manner rival the divine. The living, dying being, irreducible to the non-living thing, is thus an affront to our desire to complete the Tower of Babel. It is hardly an accident that many of the same people who are inspired by the myth of superintelligence are also awed by the notion of overcoming death.
The irony is that we, in having such a desire, are living, dying beings, and we, too, then would become mere machines, and our very desires and intentions could only be described as the inevitable result of mechanical processes. The god we aspire to become, the great mortal Leviathan, in this case would be merely machine, one thus subject to some sort of purpose external to its own ‘wishes’. We would become subject to a sort of schizophrenia, both god and machine at once. The quasi-divine fantasies of a superintelligent machine would come to light as nothing but an image of the project of our modern consciousness.
To conclude briefly: the notion of a machine capable of intelligence is not terribly plausible. What is quite plausible is that a human being would be subject to an ultimately self-lacerating and curious imitatio Dei.
“If there were gods, how could I endure not being a god”.
--Nietzsche
1. In a classic episode of South Park, Cartman discovers that his underwear is disappearing at night; the thieves turn out to be “underwear gnomes”. When asked to explain why, they reveal to the boys their three-part plan. Step 1: Gather Underwear. Step 2: ?? Step 3: Profit!
2. If only Aristophanes or Swift were alive today! Speaking of the latter, I do find myself wondering if, like Gulliver, I have woken up upon the island of Laputa. (If someone knows of a Swiftian working in this space, please let me know!)
3. “Cybernetics and Purpose: A Critique”, in The Phenomenon of Life (Evanston: Northwestern University Press, 2001).
4. There are various other activities of intelligence, of course, but they are ultimately derivative from discernment. Calculation, for example, and ‘reasoning’ generally, both depend upon not everything being seen to be a simple one, and require distinguishing things according to their kinds, or more abstractly, which is to say minimally, into units. Furthermore, calculation, if divorced from discernment of things, is rather a kind of very boring poetry, i.e. most analytic philosophy.
5. It’s worth taking a look at The Hungry Soul by Leon Kass, chapter 2, for a profound examination of animal eating and its implications.
6. This is why the issue of ‘operational definitions’ is so significant in statistical analysis.
8. When people refer to the coming to be of a new human life as “reproduction”, I have to suppress an immediate urge to gag at the use of a word for the production of a multiplicity of identical “individuals” for the coming into existence of a unique individual human life. And I’ve been around enough dogs and cats to think it almost as puke-worthy in the case of animal life. Extending the word ‘individual’ to non-living instances of a mechanical prototype is yet another example of ignoring a metaphorical extension.
9. Of course, he also added some external source of ‘energy’, in this case, lightning. He thereby acknowledged the need for some cause beyond the material while interpreting it materially.
10. There is a counter to this, but its subordinate status is best revealed in the general neglect of Hegel’s Philosophy of Nature.


Thank you for so thoroughly laying out what I’ve been feeling for a while now.
I’ll also need to revisit and ponder more deeply, but for now this Blake quote comes to mind: “Throughout all these Human Lands
Tools were made and Born were hands
Every Farmer Understands”
Fascinating read! Lots to unpack here and I'll definitely have to work through it again. A few thoughts or points of connection that came to mind as I read this:
"But are we sure that the machine which calculates such “knows” what the statistics mean, in terms of what the activities of the game involve?"
I think this relates to Luciano Floridi's research about AI as circumventing the signal grounding problem: "[LLMs are] powerful informational tools that mediate human access to accumulated knowledge, but they are not themselves knowers." https://arxiv.org/pdf/2512.09117
"The outputs of LLMs are, in fact, only significant or speeches insofar as they are read and understood by a speaking being—a human."
Love this note, and it relates to something similar I recently wrote about, which looks at information according to its 'usefulness' to meaning-making agents rather than its mathematical qualities. There's no meaning without a meaning maker (Chesterton)!
To the core question of the 'thinking' or 'intelligent' machine, I have a working theory which this essay brought to mind (perhaps orthogonal): One of the core challenges with defining "intelligence" is the underlying linguistic framework which conceptualizes language/words in 'atomistic' terms (outlined by Charles Taylor in "The Language Animal"): "language is a collection of independently introduced words." We think that words relate to some external entity or idea (1-1): intelligence is a 'thing' that can be defined and progress measured. But even Turing understood the challenge of defining 'intelligence', which is why he chose to evaluate computer thought in relation to something else (namely, human thought) with his "imitation game." This atomistic view of language enables a way of seeing our reality as a collection of individual objects. Even the notion of the 'passive observer' (which has fueled the sciences/enlightenment - your reference to Descartes and Bacon) gives us a false sense of our tools as standalone entities rather than extensions of ourselves. This draws back to your comment about LLM output being only significant within the context of human beings(!)...they cannot think, but are an extension of human thought (Marshall McLuhan has a whole chapter on this in "Understanding Media" called Automation). The interesting thing about LLMs is that the only reason they (seem to) work is that they are built on a constitutive model of language, rather than an atomistic one (often referred to as connectionist AI vs. symbolic AI).