Discussion about this post

Jonnohull

Thank you for so thoroughly laying out what I’ve been feeling for a while now.

I’ll also need to revisit and ponder more deeply, but for now this Blake quote comes to mind:

“Throughout all these Human Lands
Tools were made and Born were hands
Every Farmer Understands”

Jens Jorgenson

Fascinating read! Lots to unpack here and I'll definitely have to work through it again. A few thoughts or points of connection that came to mind as I read this:

"But are we sure that the machine which calculates such “knows” what the statistics mean, in terms of what the activities of the game involve?"

I think this relates to Luciano Floridi's research on AI as circumventing the symbol grounding problem: "[LLMs are] powerful informational tools that mediate human access to accumulated knowledge, but they are not themselves knowers." https://arxiv.org/pdf/2512.09117

"The outputs of LLMs are, in fact, only significant or speeches insofar as they are read and understood by a speaking being—a human."

Love this note, and it relates to something similar I recently wrote, which looks at information according to its 'usefulness' to meaning-making agents rather than its mathematical qualities. There's no meaning without a meaning maker (Chesterton)!

To the core question of the 'thinking' or 'intelligent' machine, I have a working theory which this essay brought to mind (perhaps orthogonal). One of the core challenges with defining "intelligence" is the underlying linguistic framework that conceptualizes language in 'atomistic' terms (outlined by Charles Taylor in "The Language Animal"): "language is a collection of independently introduced words." We assume that words map one-to-one onto some external entity or idea: intelligence is a 'thing' that can be defined and its progress measured. But even Turing understood the challenge of defining 'intelligence', which is why he chose to evaluate computer thought in relation to something else (namely, human thought) with his "imitation game."

This atomistic view of language encourages a way of seeing our reality as a collection of individual objects. Even the notion of the 'passive observer' (which has fueled the sciences and the Enlightenment: your reference to Descartes and Bacon) gives us a false sense of our tools as standalone entities rather than extensions of ourselves. This ties back to your comment about LLM output being significant only within the context of human beings(!): they cannot think, but they are an extension of human thought (Marshall McLuhan has a whole chapter on this in "Understanding Media" called "Automation").

The interesting thing about LLMs is that the only reason they (seem to) work is that they are built on a constitutive model of language rather than an atomistic one (often framed as connectionist vs. symbolic AI).
