You might object that this is a verbal trick, that I’m arguing that AI will become conscious because we’ll start using the word “conscious” to include it. But there is no trick. There is always a feedback loop between our theories and the world, so that our concepts are shaped by what we discover.
As we interact with increasingly sophisticated AI, we will develop a better and more inclusive conception of consciousness.
Consider the atom. For centuries, our concept of the atom was rooted in an ancient Greek notion of indivisible units of reality. As late as the 19th century, physicists like John Dalton still conceived of atoms as solid, indivisible spheres. But after the discovery of the electron in 1897 and the discovery of the atomic nucleus in 1911, there was a revision of the concept of the atom — from an indivisible entity to a decomposable one, a miniature solar system with electrons orbiting a nucleus. And with further discoveries came further conceptual revisions, leading to our current complex quantum-mechanical models of the atom.
These were not mere semantic changes. Our understanding of the atom improved as we interacted with the world. So too will our understanding of consciousness improve as we interact with increasingly sophisticated AI.
Sceptics might challenge this analogy. They will argue that the Greeks were wrong about the nature of the atom, but that we aren’t wrong about the nature of consciousness because we know firsthand what consciousness is: inner subjective experience. A chatbot, sceptics will insist, can report feeling happy or sad, but only because such phrases are part of its training data. It will never know what happiness and sadness feel like.
But what does it mean to know what sadness feels like? And how do we know that it is something a digital consciousness can never experience? We may think — and indeed, we have been taught to think — that we humans have direct insight into our inner world, insight unmediated by concepts that we have learned. Yet after learning from Shakespeare how the sorrow of parting can be sweet, we discover new dimensions in our own experience. Much of what we “feel” is taught to us.
The philosopher Susan Schneider has argued that we would have reason to deem AI conscious if a computer system that had not been trained on any data about consciousness nonetheless reported having inner subjective experiences of the world. Perhaps this would indicate consciousness in an AI system. But it’s a high bar, one that we humans would probably not pass. We, too, are trained.
Some worry that if AI becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, the vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians, even though most would grant that the animals they eat are conscious.
Just as AI has prompted us to see certain features of human intelligence as less valuable than we thought (like rote information retrieval and raw speed), so too will AI consciousness prompt us to conclude that not all forms of consciousness warrant moral consideration. Or rather, it will reinforce the view that many already seem to hold: that not all forms of consciousness are as morally valuable as our own.
This article originally appeared in The New York Times.
Written by: Barbara Gail Montero
©2025 THE NEW YORK TIMES