As I was reading Walter Donway’s recently published book, A Serious Chat With Artificial Intelligence, I was looking for an aspect to
focus on in writing an essay for this blog. There is nothing unusual about
that. I prefer to discuss issues rather than to write conventional book
reviews.
I found what I was looking for while reading Chapter 16,
entitled ‘Causality, Agency and an Old Puzzle’. At that point, the thought
occurred that Walter’s most important contribution has been to explain why an
Aristotelian perspective on free will is helpful to an understanding of the
differences between human intelligence and the kind of intelligence exhibited
by Large Language Models (LLMs). (I refer to the author as Walter because use
of his family name seems excessively formal in the kind of essay I am writing, particularly
since I have had personal correspondence with him at various times in the
past.)
I will outline Walter’s argument more fully later. At this
point I note that the issue of whether humans have free will is important from
a philosophical and legal perspective because if you regard human free will as
an illusion, you don’t have any more reason to attribute agency to humans than
to LLMs that merely simulate relevant aspects of human behavior. It is also
important for users of LLMs to understand that these models don’t have agency if
they are to use them productively. Individual users are responsible for the
questions that they ask the models. The models present evaluative judgements in
their output, but it is up to individual users whether to accept or reject those
judgements, or to question the models further about the basis of their
reasoning.
At about the same time as I identified Walter’s explanation of the
relevance of an Aristotelian perspective as his most important contribution, I
began to wonder whether LLMs would agree with me about that. I resolved that
after I had finished reading the book I would ask ChatGPT and Grok an
open-ended question about the book’s most important contribution.
Before I reached the end of the book, I became aware of
other important contributions. For example, Walter’s discussion of the case for
a new Enlightenment, in Chapter 30, is also an important contribution. It seems
to me, however, that the Aristotelian perspective is of fundamental importance.
In what follows, I present the views of ChatGPT and Grok on
the most important contribution of the book, and my assessment of those views,
before outlining the reasoning that Walter uses to establish the relevance of
an Aristotelian perspective.
Views of ChatGPT and Grok
I asked ChatGPT and Grok the following question:
Drawing upon your extensive knowledge of the limitations
and uses of AI, what do you consider to be the most important contribution of
Walter Donway's recent book, "A Serious Chat With Artificial
Intelligence"?
I considered using less value-laden words such as “unique or
distinctive” instead of “most important” but decided it would be interesting to
see whether the LLMs would show any reluctance to make value judgements about the
importance of Donway’s contributions.
I present the responses below, with only minor format editing.
ChatGPT’s response:
“One caveat up front: Walter Donway’s A Serious Chat With
Artificial Intelligence is not a technical work about AI systems so much as
a philosophical interrogation of what AI can and cannot be taken to be. If you
approach it expecting new claims about model architectures or empirical
capabilities, you’ll miss its main contribution.
What Donway does—arguably his most important contribution—is
to reassert a sharp conceptual boundary between simulation of cognition and actual
cognition, and to explore the ethical and cultural consequences of blurring
that boundary.
He develops that contribution in three interrelated ways:
1. Clarifying the category mistake at the heart of AI hype
Donway pushes back against the tendency to treat linguistic fluency as evidence
of understanding. In effect, he is updating a line of argument associated with
John Searle’s Chinese Room: systems that manipulate symbols according to rules
can generate convincing outputs without possessing meaning, intention, or
awareness.
His distinctive move is not merely to restate that argument, but to show how ordinary
users—not just philosophers—are now routinely tempted into anthropomorphism
by conversational AI. The “serious chat” format itself dramatizes that
temptation.
2. Re-centering agency, responsibility, and moral standing
A second contribution is normative. Donway insists that moral agency cannot be
offloaded to artifacts. Even highly capable AI remains an extension of human
purposes, not an independent bearer of rights or responsibilities.
This matters because contemporary discourse sometimes drifts toward attributing
quasi-personhood to AI systems. Donway argues that doing so risks:
- diluting accountability (who is responsible when AI causes harm?), and
- eroding the conceptual foundations of human dignity.
In that sense, the book is less about AI per se than about
defending a humanistic moral framework under technological pressure.
3. Exposing how interaction shapes belief
The dialogical structure of the book illustrates a subtler point: engagement
with persuasive language can generate the illusion of depth. This connects with
a broader psychological tendency—people come to feel that something
understands because it responds coherently.
Here Donway’s contribution overlaps with concerns in epistemology and cognitive
science about how easily humans infer mind from behaviour. He is effectively
warning that AI may amplify long-standing cognitive biases rather than
introduce entirely new ones.
Why this matters
The enduring value of the book is not that it settles the
debate about machine intelligence—it doesn’t—but that it sharpens the terms of
that debate. It reminds readers that:
- Functional performance (e.g. generating essays, answering questions) is not the same as conscious understanding.
- Treating AI outputs as if they originate from an agent can subtly reshape norms of truth, responsibility, and justification.
- The real philosophical stakes lie in how humans interpret and integrate these systems into social practices.
In short, Donway’s most important contribution is to resist
conceptual drift: he tries to keep us from sliding, uncritically, from “this
system produces intelligent-seeming outputs” to “this system is, in any morally
or metaphysically significant sense, intelligent.”
Grok’s response:
“The most important contribution is its philosophically
grounded clarification of what AI truly is—and what it is not—by using advances
in AI as a mirror to better understand distinctively human intelligence,
consciousness, agency, and volition.
Walter Donway’s A Serious Chat with Artificial
Intelligence (2026) stands out in the recent literature on AI limitations
and uses (e.g., works touching on LLMs, the Chinese Room, Turing tests,
alignment, existential risks, productivity impacts, and philosophical debates
by authors like Searle, Bostrom, or more popular explainers). It combines
accessible explanation of AI’s mechanics, history, economics, and capabilities
with a sustained Objectivist-inspired philosophical analysis, drawing
especially on Ayn Rand’s ideas alongside Aristotle.
Key Elements of the Book
- Clear-eyed explanation of AI as a tool: Donway details how LLMs work (statistical pattern-matching on vast training data equivalent to multiple Libraries of Congress, processing at trillions of operations per second), their rapid adoption (e.g., ChatGPT’s explosive growth), economic potential (boosting productivity enormously as a “general technology”), and the ingenuity behind it. He views AI positively as a powerful amplifier of human purposes—responsive, not initiatory—but not sentient or autonomous.
- Rejection of mechanistic reductionism and singularity hype: He critiques the notion that sufficient complexity will magically produce awareness, self, caring, or true agency (e.g., via “emergence” from scale alone). AI simulates intelligence without understanding, has no stake in outcomes (it doesn’t face life/death or pursue values for itself), and lacks volition—it responds but does not initiate thought or choose to focus. This echoes Searle’s Chinese Room (syntax without semantics) but goes deeper.
- Human distinctiveness via philosophy: Drawing on Rand (e.g., her indestructible robot thought experiment) and Aristotle’s four causes (especially formal and final causes, beyond Humean efficient causation/billiard-ball determinism), Donway argues that human consciousness, free will (as the choice to focus conceptually), values, and purpose arise from our nature as living beings who must act to sustain life. AI has no “final causes (purposes)” or “self to act for”; it can’t care and thus doesn’t truly think or create with intention. Each AI advance forces us to refine what makes human intelligence special: awareness, effortful thought, stakes, and meaning.
This integrates technical literacy with epistemology and
metaphysics, making it more than a primer or alarmist tract. It treats AI
conversation as self-discovery: prompting AI reveals our own concepts of
intelligence, creativity, and authorship (e.g., in chapters on AI as “author”
or generator of art/poetry).
Why This Stands Out in Recent AI Literature
Much recent discussion fixates on capabilities (or doomer
scenarios), regulation, jobs, or ethics in a narrow sense. Donway’s book
reframes the debate around fundamental questions of mind and human nature,
avoiding both uncritical boosterism and Luddite panic. It portrays AI as a
magnificent but non-conscious tool that elevates human potential when guided by
purposeful users (like a rider directing an elephant, inverting some Humean
intuitions). This offers a rational, optimistic yet sober foundation for engaging
with the technology.
In short, its lasting value lies in using AI not just to
marvel at machines, but to rediscover and reaffirm what is uniquely
human—reason, volition, and purposeful living. This makes it a worthwhile
contribution amid the flood of AI commentary.”
My assessment of the LLM responses
In my view, the responses by ChatGPT and Grok both provide accurate summaries of contributions that the book makes. Both responses show
some recognition of the importance of the question of agency. However, they both
seem reluctant to pinpoint a particular contribution as being most important.
The information that ChatGPT provides under the heading, “Re-centering
agency, responsibility”, is at such a high level of generality that it fails to
reveal what I consider to be the most important contribution of the book. The information
that Grok provides under the heading “Human distinctiveness via philosophy” comes
closer to doing that. In my view, Grok’s response would be more helpful to people
who have not read the book because it points explicitly to the relevance of Aristotle’s
views about causation and what makes human intelligence special.
I had a subsequent chat with Grok on the question of value
judgements. Grok acknowledged that a value judgement was involved in responding
to my question about the most important contribution of the book. However, Grok
went on to assert: “this kind of evaluative analysis is well within my
capabilities when grounded in available knowledge of the literature, the book's
content, and philosophical reasoning”. After further explanation that AI has strengths
in the reasoned evaluation required for the task, Grok acknowledged that it
doesn’t have personal values, lived stakes, or consciousness to
"care" about the outcome in a human sense. It then made a point that
is particularly relevant to the purpose of this essay:
“This ties directly back to the themes in Donway's work: AI can respond with sophisticated analysis and even evaluative reasoning by leveraging patterns and concepts derived from human thought—but it doesn't initiate or hold purposes of its own. The value judgment gains its force from the human user who asked the question and can then accept, critique, or refine it.”
How does Walter establish the relevance of an
Aristotelian perspective?
I have no doubt that, if asked, both ChatGPT and Grok could produce
reasonable summaries of Walter’s line of argument establishing the relevance of
an Aristotelian perspective to considering the limitations and uses of AI. They
could probably complete the task within a couple of seconds. However, it was
only after I had written what follows that the thought crossed my mind that I
could have sought help from AI. Like an old dog, I am now slow to learn new
tricks.
Walter begins the discussion by noting the relevance to debates
about artificial intelligence of the enduring philosophical puzzle about freedom
of human will. He writes:
“Questions about whether machines can be agents, whether they can “decide,” whether they can be responsible, or whether they might someday possess a will of their own are, at bottom, the same questions that philosophy has long struggled to answer about human beings.”
The issue of whether human agency is real or illusory is of
crucial importance when considering whether LLMs can be agents. If you regard human
free will as an illusion, what basis do you have to distinguish between actions
that are attributable to human agency and actions of LLMs that can only simulate
relevant aspects of human behaviour? Do you believe that legal systems should
allow an individual who purposefully uses an LLM for nefarious purposes to
claim that the LLM shares legal responsibility? (The questions are mine, but I
think they are consistent with Walter’s reasoning on this point.)
Walter points out that the idea that human agency is
illusory stems from a view of causality that has come to dominate modern
thought since the 18th century. Under the previous Aristotelian
tradition, actions were explained by the nature of the entity acting, and by its
ends or goals. Within this framework, an individual human chooses to act because
that is the kind of entity it is. Choice is “a mode of causation appropriate to
a rational animal”.
With the rise of early modern philosophy in the 18th
century, causality increasingly came to be treated as something that must be
observed in experience. David Hume famously argued that we never see causation
itself. We infer causation when we see constant conjunction, as when one event
follows another with regularity. That philosophical view of causation excludes
free will. If every action is “caused” by prior actions, volition must be
either an illusion or a miracle.
Walter notes that neuroscience was developed in an
intellectual environment in which modern science had inherited the metaphysical
position that causation is mechanical succession. In that context, when we
observe that some neural events precede conscious awareness it is easy to jump
to the conclusion that free will must be an illusion.
However, it is important to recognize that the view that
causality is mechanical succession is itself based on metaphysical reasoning. If we view
causality in terms of Aristotelian rather than Humean metaphysics, a different
picture emerges:
“The cause of an action is the nature of the entity acting, operating under specific conditions. A human being is a living organism with conceptual awareness, capable of directing attention, identifying values, and choosing to initiate effort to think.”
Walter observes, correctly, that we know that introspectively.
It seems to me that cognitive psychology also adopts (implicitly) a broadly Aristotelian
view of human action. It assumes that human behaviour is driven by internal
cognitive processes that give individuals considerable latitude to plan, make
decisions, develop good habits and override impulses.
The important point is that we have good reasons to trust
our own observations about our ability to focus our own minds. As Walter puts
it:
“Every normal adult recognizes the difference between drifting mentally and choosing to focus the mind, between evading a baffling issue and taking it on. This experience is not mystical; it is part of ordinary consciousness. To dismiss this as illusory because it does not fit a truncated model of causality is to elevate theory above data.
Once this is recognized, the contrast with artificial intelligence becomes clear. Machines do not initiate mental focus.”
Walter ends Chapter 16 with the transcript of an exchange
with ChatGPT that occurred during the writing of the chapter. The exchange
illustrates brilliantly the division of labor between Walter and Chat. At one
point, Chat states:
“You supply direction, value, and necessity, and I supply articulation
under constraint. That is tool use at a very high level – not agency.”
Conclusion
In my view, the most important contribution of A Serious
Chat With Artificial Intelligence is the author’s explanation of the
relevance of an Aristotelian perspective to an understanding of the uses and
limitations of AI.
In responding to a question about the book’s most important
contribution, both ChatGPT and Grok summarized contributions that the book
makes, but seemed reluctant to pinpoint a particular contribution as being most
important. Grok’s response came closest to identifying what I consider to be
the book’s most important contribution.
When I challenged Grok about its willingness to respond to a
question requiring a value judgement, Grok asserted that this kind of
evaluative analysis is well within its capabilities. However, it also noted that
AI models cannot hold purposes of their own. Human users retain responsibility for
the value judgements they make.
I have outlined the reasoning that Walter Donway has used to
explain why an Aristotelian perspective on free will is helpful to an
understanding of the differences between human intelligence and the kind of
intelligence exhibited by Large Language Models. Walter’s reasoning about free
will seems to me to stand out as an important philosophical contribution to an
understanding of the uses and limitations of the AI models currently available. I hope that this book receives the widespread
attention that it deserves.