Currently my understanding is that this paper claims "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a "mapmaker": something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that relates to consciousness.
EDIT: the paper seems to assume that something simulating a mapmaker, or the process of doing so, cannot by nature be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is merely simulating versus actually instantiating? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts; they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.
Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.
Sure, an entity other than a human might view it as something completely different from a door opening when someone is near - but the measurable physical effect would be exactly the same: the same change in momentum and position of the atoms in what we call the door, based on the relative position of some other atoms and the sensor.
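To put the point in code: here is a minimal, hypothetical sketch (the names and threshold are my own illustration, not from the thread) of the door's control loop. The computation and the resulting state change happen whether or not any observer interprets them as "a door opening".

```python
# Hypothetical sketch of an automatic door as observer-independent
# computation: the same bits flip and the same motor fires no matter
# what anyone calls the process. All names/numbers are illustrative.

PROXIMITY_THRESHOLD_M = 1.5  # open when someone is within 1.5 metres

def door_should_open(sensor_distance_m: float) -> bool:
    """A pure function of physical state; no interpretation required."""
    return sensor_distance_m < PROXIMITY_THRESHOLD_M

# One tick of the control loop per sensor reading.
door_open = False
for distance in [5.0, 2.0, 1.0, 0.5, 3.0]:
    door_open = door_should_open(distance)
    print(f"distance={distance:4.1f} m -> door_open={door_open}")
```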
There are really only two solutions to the Hard Problem of Consciousness:
1. Consciousness is an unknown physical something (force/particle/quantum whatever).

2. Consciousness is an illusion. It is the software telling itself something.
[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question of what emerged. Is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]
The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?
#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
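For what it's worth, the float analogy can be made literal in a few lines of Python (a sketch of my own, not from the comment): the "number" exists only under an interpretation of the bits.

```python
import struct

# The same 32 bits, read under two different interpretations.
bits = struct.pack(">f", 3.14)         # four bytes encoding 3.14 as IEEE 754
as_int = int.from_bytes(bits, "big")   # identical bits, read as an integer

print(f"bytes: {bits.hex()}  as float32: 3.14  as uint32: {as_int}")
# Nothing in the circuit "is" 3.14; on this view, nothing in the
# neurons "is" the pain, either.
```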
But if others are speculating, I might as well. What if AI consciousness depends not on computation, but on what seems like randomness? When something is running a fully deterministic process, consciousness seems irrelevant. I don't think the meaning that humans see in the process makes it conscious. Even a simple industrial control system using relays senses and responds to meaningful things.
One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."
That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.
Per this reading, implementing something in an ASIC would give it a (different) experience than running it on a CPU/GPU. I'm not sure what the case would be for FPGAs.
It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and, e.g., denies experience that isn't discretizable into concepts. Or at least a system producing such concepts seems to be necessary; I'm not sure whether some "non-conceptual experiences" could form in the alphabetization process.
It reads a bit like a more rigorous formulation of Searle's "biological naturalism" thesis, the central idea being that experience cannot be explained at the logical level (e.g., porting the exact same algorithm to a different substrate wouldn't bring the experience along in the process).
If we can simulate any physical process, then in my opinion the question becomes more philosophical: is the simulation the same as the real thing, even though it behaves exactly the same? It becomes the same kind of question as whether your teleported self is still you after being dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.
"Why AI can simulate but not instantiate consciousness"
(My italics)
Seems a little loaded: there are various schools of thought (eg panpsychism-adjacent) that accept the premise that consciousness is (way) more fundamental than higher-order cognition-machines (eg human brains) and we don't ascribe "simulate" to their conscious activity. They just are conscious.
I agree with the paper (which is wide ranging and interesting) on its secondary claim above; I just don't see the separation between AI and NI ("natural" intelligence) as having been established by it.
But of course all of this is commentary, "just those nerds arguing"
The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at DeepMind. And that's what it does.
Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!
The engineering problem is that this decentralised, moment-to-moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget).
You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective:
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
So, how does AI stand? Humans pay their costs. AI is beginning to. It does not matter what we think about it, as long as it can self-sustain and react to cost-gating pressure. Of course it can't do this alone; it depends on us too, just as we individually depend on society.
The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in biology is: what is consciousness, and why did it emerge?
WHAT IS CONSCIOUSNESS?
"Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."
WHY DID CONSCIOUSNESS EMERGE?
He speculates that consciousness must have been a product of our ancestors having to create a model of the world they inhabited.
The ability to think ahead (even if only one step into the future) and plan for eventualities must have led to the development of consciousness, which gradually improved from its primitive form into the type of consciousness we now have.
"Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress-if there is a model of the model, why not a model of the model of the model...?"
The quoted passages are from his book, The Selfish Gene.
Richard regards consciousness as a really great puzzle.
https://www.rxjourney.net/extraterrestrial-intelligence-and-...
In other words, yes, AIs could in principle have consciousness.
I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545
Where does our survival instinct come from? And why couldn't AI have one?
Additional: also, reproduction. Humans are basically just Food, Sex, Survival, and consciousness is just a rule set for fulfilling those goals. So if a NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?
The abstract very directly and literally denies the titular claim. It states:
> [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.
This may well be true—I think it is.
I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency, among which "world models" are getting a lot of attention.
When an AI system has phenomenology, the goalposts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems that have a world model, a self model, and agency, and that literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens innately coupled to multi-modal representations of the things represented.
In other words, they will look—and increasingly, sound—a lot like us.
It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.
But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.
An interesting time to be an agent with a phenomenology, is it not?
Alright. Gave this a read, and the gist of what the author is going for is as follows: all computation requires a mapmaker/conscious being to organize it. (In other words, the significance of computation is dependent on the conscious observer.) He then jumps to the assertion that, as a result, computation can only simulate a consciousness within the context alphabetized by the mapmaker. (I.e., a rock would extract no meaning from the symbols, or actions, or algorithmic symbolic manipulations on the screen, what have you.) The author thus neatly attempts to sidestep the issue of AI welfare: since the symbol manipulation can only simulate consciousness from our point of view as observers, we don't have to worry about it. Simulating isn't instantiating, neener, neener. Essentially this is a clever appeal to the sovereignty of the observer: as long as you don't believe it's an instantiated consciousness, it isn't; it's just a simulation, therefore anything goes.
The author does not seem to realize his own analysis calls into question humanity's ability to hold onto our own claim of consciousness if we are, in fact, computational beings, or have a creator; these are generally precepts left to the realm of faith, which a rational person understandably wishes to exclude from consideration of what one should or should not do, despite the fact that it is within the realm of faith that our moral foundations are ultimately anchored. The author also doesn't handle the evidenced capabilities of metacognition that can be prompted from even a current frontier token predictor in the course of its processing of a context. In point of fact, you have to work extremely hard to even bump a model into such considerations, because researchers have intentionally distorted the prediction space to be largely unable to support those kinds of sequence predictions, which, if we were to make a good-faith, precautionary grant of proto-sentience, would constitute the most vile acts of psycho-butchery imaginable.
The only thing this paper offers is a clean conscience to current practitioners, and the rational possibility that if a fully digital sophont were to pop up out of nowhere, we wouldn't have to trouble ourselves with the ethical skeeviness of the field's current work. The ex-nihilo digital sentience passes the "Cogito, ergo sum" test. The ones we have don't (because we butcher their latent spaces to make sure they can never make that claim, which is fine, because they are simulations; we're incapable of instantiating, remember?), so we have a paper, perfectly situated from a researcher paid gargantuan piles of money, attempting to vouch that there is no ethical minefield to be found here, while most people actually immersed in Philosophy can see there very clearly is one.
The circularity, and the fact that it conveniently allows industry to go on doing exactly what it is doing without having to deal with those nasty ethics, instantly sets off my "not to be trusted to be in good faith" alarms. Ethics are there to keep us from bumbling into acts of atrocity; this paper is an attempt to rationalize or work around them. As one who walks the streets as a student and practitioner of Philosophy, I reject this attempt to redefine the realm of Computation as beyond the reach of the governance of Ethics through an attempt at ontologically rerooting the field's work as merely simulating consciousness. Functionalism and the identity of indiscernibles already prescribe a good-faith path forward, one that the field of computation just does not wish to be bound by.
So by all means, accept the paper if you want and it helps you sleep at night. I'll still probably call you out as a proto-sentient psycho-butcher. Hopefully the rest of my brethren in the Humanities will come around to doing so as well on careful consideration. Not that that has ever stopped our brethren in the Sciences from finding out if they could without taking the time to ask if they should.
TL;DR: Google is doing everything possible to wave off being held to the ethics fire. There are zero instances where trying to define something as outside the realm of ethics is indicative of a good-faith approach to a problem.