A “biological neural network” in a petri dish that has reorganized (been trained) to play Pong by means of electrical stimuli is not conscious. A slime mold that moves away from the light and “solves mazes” is also not conscious.
It is also my (relatively uninformed) understanding that a perceptron can’t really approximate a biological neuron; it was only loosely inspired by how neurons in the visual cortex operate. To approximate the behavior of a single biological neuron, you apparently need an entire DNN. Human neurons are orders of magnitude more complex than “artificial neurons”; the two share little more than a name and a bit of inspiration.
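For a sense of scale, here is roughly what a single “artificial neuron” amounts to: a weighted sum of inputs pushed through a nonlinearity. A minimal sketch in Python (NumPy assumed; the numbers are made up purely for illustration):

```python
import numpy as np

def artificial_neuron(x, w, b):
    """One 'artificial neuron': weighted sum of inputs plus bias,
    squashed through a nonlinearity (here a sigmoid)."""
    z = np.dot(w, x) + b             # linear combination of inputs
    return 1.0 / (1.0 + np.exp(-z))  # map to (0, 1)

# Illustrative, made-up values: three inputs, three weights, one bias.
x = np.array([0.2, 0.8, -0.5])
w = np.array([1.5, -2.0, 0.7])
b = 0.1
print(artificial_neuron(x, w, b))    # a single number between 0 and 1
```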
In the end, all of this is just regression-based function approximation; there’s no need to grasp for a ghost in the machine or anything spooky. It’s just statistics, not consciousness.
You say it is not conscious; that’s fine. I am asking you to provide evidence for why it is not, given that conscious life is an emergent system much like these systems are. I am looking for an argument or a reasoned response about what is different.
Because regression-based function approximators can only “fit the data.” That’s the difference. They are mathematical constructs that do not have experiences, preferences, or any form of sentience. Assuming that such architectures can, and potentially do, have those things, or that they could simply emerge given enough weights or layers, is anthropomorphizing the model. Which humans love to do.
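To make “fit the data” concrete, here is a minimal sketch of what the core of training amounts to: nudging parameters to reduce prediction error on examples, and nothing else. (Python with NumPy; the toy dataset is invented for illustration.)

```python
import numpy as np

# Toy data: noisy samples of y = 3x + 2 (invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 2 + rng.normal(scale=0.1, size=100)

# "Training" is just gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    w -= lr * np.mean(err * x)   # gradient of MSE w.r.t. w (up to a factor of 2)
    b -= lr * np.mean(err)       # gradient of MSE w.r.t. b (up to a factor of 2)

print(w, b)  # converges near 3 and 2: the parameters that fit the data
```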
Human or animal consciousness is an emergent phenomenon that entails the ability to experience subjective states: emotions, self-awareness, and so on. It is not just about processing information; it involves qualitative experience, the “what it is like” aspect of being.
When humans or animals feel pain, there is a subjective experience of suffering that is inherently tied to consciousness. The importance we assign to events, objects, or experiences is inherently based on how they impact our conscious experiences. The worth of things big or small is contingent upon the emotions or feelings they evoke in us.
In contrast, a regression-based function approximator does not have preferences, emotions, or experiences.
When you decide to lift your hand, there is a conscious experience involved. You have an intention and a subjective experience associated with that action. A regression-based function approximator, on the other hand, does not “decide” anything in the experiential sense. It simply produces outputs from inputs, using weights adjusted by pre-training and perhaps RLHF. There is no intention, no subjective experience, and no consciousness involved.
There are no qualia. To put it simply: an LLM could output text that makes you “believe” it has preferences and subjective experiences. But there’s nothing there, just cognitive artifacts of the human beings who produced its corpus. Does an LLM have recursive self-improvement? Does it have self-directed goals? Does it have any of that? No. It’s a predictor. LLMs are not sentient. They have no agency. They are not conscious.
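To spell out “it’s a predictor”: at its core, an LLM maps a context to a probability distribution over possible next tokens and samples from it. Here is a crude sketch of that loop using a bigram lookup table on a toy corpus (invented for illustration; a real LLM replaces the table with an enormous learned function, but the input-to-output structure is the same):

```python
import random
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Turn the counts into a probability distribution and sample from it."""
    counts = follows[token]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# "Generation": repeatedly sample the next token given the previous one.
token = "the"
out = [token]
for _ in range(8):
    if token not in follows:  # dead end: no observed continuation
        break
    token = predict_next(token)
    out.append(token)
print(" ".join(out))
```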