The takeaway seems to be "Only meat brains can be conscious because I can feel it and computers aren't made of meat". Which is basically the plot line of every human/robot movie for the last 80 years.
This is kind of self-contradictory. Then are humans not conscious? Or does each have its own consciousness? Then why not the machine? I'm not sure what point is being made here. Yes, the states of a human brain and a transformer are absolutely incompatible (humans at least share a common architecture), which is why any attempt to map a model's "emotions" onto humans', and the entire model-welfare concept, is pretty dubious. That doesn't prove there's no (or can never be) consciousness in there, though.
That's the most coherent argument in the entire article. It criticizes the Butlin report in particular and extrapolates that to "never", while ignoring modern takes on the question (e.g. interpretability studies showing a vague similarity between the two at a level deeper than just language) and any possible future evidence.
In a sense the title is right: nobody has ever formally defined consciousness, so you and I and anyone else are free to make almost any argument and spin any narrative according to our beliefs, and it will be "true". Ill-defined terms and baseless solipsism are the main problems with all of these discussions. Good thing that in practice they matter about as much as the question of whether a submarine swims.
Anyway, I plan on posting it online somewhere eventually, but HN seems like a good place to throw the introduction out there.
The basic argument I have is that consciousness is a red herring, a concept that was relevant historically but is increasingly routed around by cybernetic systems that aren’t interested in interior states.
Here’s the intro. If you find this interesting, please let me know!
MacGuffin. Whodunit. Smoking gun. Fall guy. The detective fiction genre is an underappreciated source of terminology for unsolved problems, useful not only for criminal mysteries but also for unanswered questions in philosophy and science. One such term is the red herring: an apparently useful clue that, upon further inspection, is actually a distraction from solving the main mystery at hand.
The concept of consciousness may be such a red herring. It has occupied the minds of philosophers for centuries and increasingly frames debates around AI, animal rights, and medical ethics, among other issues. And yet, even as consciousness is rhetorically dominant, in practice it is increasingly ignored and routed around in real-world situations. When rights are bestowed and resources allocated, the mechanism by which these are done is increasingly uninterested in interior consciousness.
This is not because the problem of consciousness has been solved, or because a revolutionary new theory has novel insights. Rather, it is the natural consequence of cybernetic systems concerned only with output, not internal states or abstract ideals.
What is needed, then, is a genealogy of the concept of consciousness, in the manner of Nietzsche, Foucault, or Charles Taylor. Not a new theory of consciousness, but a story of how the concept developed and came to underlie significant legal, moral, and philosophical systems, and how that foundation is rapidly fading away.
What this genealogy reveals is not merely the history of a single concept or the changing of societal systems, but a deeper human shift: the erosion of interiority itself and the triumph of the external. In simpler terms: a new, largely exterior idea of the self is forming, while at the same time, it is becoming more difficult to conceive of an interior-focused one.
This essay will trace the history of the concept of consciousness, show how it is being routed around by output-focused systems, then ask what effect this has on human life, and how to address it.
Somebody with a different background might take on commenting on the article; then, instead of scattered short comments here, we might get a coherent picture.
I explored a related angle on how AI challenges our assumptions about self and awareness.
https://www.immaculateconstellation.info/why-ai-challenges-u...
He seems to make up a mental image of how a neural network might work on a computer and argues against that representation instead.
In any case, intelligence, consciousness, sapience, ego, etc. will probably need stricter, fact-based definitions before we can agree on whether or not artificial consciousness can exist.
My personal theory is that consciousness is a specific biological adaptation, and it exists primarily to manage the care of young and to manage status and relationships in kin groups. A theory of mind benefits the care of young, which may explain why only mammals and birds (two classes of animals that do a lot of caring for young) appear to have either a prefrontal cortex (mammals) or a structure that performs the same functions (birds). In my opinion, consciousness as people experience it is also necessary for developing a theory of mind about other people, which is beneficial for understanding status and hierarchy in a group, and for cultivating and maintaining status.
This is partially why you can be a mystery to yourself; the same skills you'd use to try to understand someone else must actually be used to understand yourself. eg: "was I secretly jealous when I cut down my coworker?" Why don't you just know with 100% certainty? I'd argue that it's because the maintenance of ego does not require this certainty, because ego is tacked onto an already developed brain and lacks perfect insight into the brain's processes. I'd also argue this is why there can be such a gap between who someone believes themselves to be, and who they actually are. You're maintaining a personal identity which ties directly to status. It's not super relevant whether you're consistent over time or 100% internally consistent. You must meet the threshold to maintain your status, but really no more is needed.
It's also why you talk to yourself in inane ways. You're walking through your house and you finally find your lost car keys. "I found them!" you might say to yourself. But who are you telling? Certainly "you" already know. I'd argue that the "you" in your head is an abstract identity that you have imperfect access to -- just as you have imperfect access to and knowledge of other people. Your mind builds a model of your own mind using the same tools it uses to build models of other people's minds. You have _more_ information about your own mind, but you certainly do not have omniscience about it. The models are always imperfect.
I could go on, but I'd also argue this is sort of the basis for religion. Just like we see faces in the clouds, we try to find a theory of mind in places where it doesn't actually exist. (eg: "We must have upset an ego out there, and that's why it's not raining.") I also think it's why people have moral intuitions but not mathematical intuitions. Or why moral intuitions fail at scale. (eg: Peter Singer's famous child drowning in a small pond thought experiment.)
Will AI as a general concept ever achieve human level cognition and sentience? Depends on your definition of "ever".
Anyone who tries to feed you a line about "never" doesn't understand what they're talking about. On almost any topic.
AI as a concept is never going away and if we keep working the problem, we will eventually achieve a sentient AI. There's nothing magical about meat, there's only things that we don't understand.
To assert that only a human meat brain can be conscious is to assert that only humans can be conscious. That excludes alien life for one, and a large fraction of terrestrial life. One can argue quite successfully that many terrestrial species are conscious and aware. Elephants, great apes, whales, dolphins, octopi, pigs, corvids.
If an octopus is conscious (and I have good reason to believe they are) why is it so ridiculous to think that a hunk of silicon can do it?
Humans really are not special. We're just animals like any other. Our brains are not cosmically blessed and unique. There is no magic.
Whether AI needs consciousness is a totally separate question. LLMs are the great Chinese room; I'd say they have unconscious understanding. The distinction is like C vs Lisp and similarly meaningless, but it may become meaningful in a constrained self-learning robotics context.
AI will never need to be conscious, AI isn't a moth flying to an open flame, but people will try anyway