Regarding your response to the possibility of someone having their brain replaced with a lookup table, keep in mind that your response basically amounts to behaviourism, since you appear to be arguing that regardless of how the replacement functions, as long as it takes in the same electrical inputs and gives out the same electrical outputs, it should be considered conscious, which sounds very implausible.

Imagine I am replaced by an artificial brain that is trying to pretend to be me. The brain consciously thinks of itself as trying to deceive others, but it is programmed to send out the same electrical impulses so as not to give the game away through facial expressions and the like. There seems nothing ridiculous about this hypothetical, although I expect you could argue it would be practically difficult to achieve. The point is that its experiences are quite different from mine, so it doesn't seem a safe assumption that just because it gives you the same electrical output, it has the same conscious experience. In general, behaviourism strikes almost all people as very unlikely to be true, and yet that is what your argument effectively amounts to, considering you concede that replacing the brain with a different device, one which functions quite differently but sends out the same electrical impulses, would supposedly lead to the same conscious experiences.

Also, your response to the hypothetical replacement with lookup tables doesn't seem consistent with the rest of your argument. If the real consciousness is located in the person designing the device, it is still obvious that the replacement qualitatively changes the conscious experience, since the designer is plainly not having the same conscious experience the brain would have had if it had not been replaced. Basically, your response appears to concede that the qualitative experience would change if the brain were replaced with lookup tables, even if you argue that consciousness is still required at some step. This extends pretty easily to large language models: perhaps the consciousness is in the people training these models but not in the models themselves. I don't actually think this is correct, and I'm far from certain about AI consciousness, but your argument doesn't appear to establish the conclusion you're trying to establish.
I wouldn't call it behaviorism, since I'm fairly confident what I'm arguing for is a form of functionalism.
The artificial brain pretending to be you would have different subjective experiences, but only because it functions differently - it's a different brain trying to deceive others. If that's true, we should be able to observe different behaviors under the right conditions. Otherwise, in what way is it trying to deceive people? Suppose no differences in behavior occur for the entire life of this hypothetical person, and they go to their deathbed without giving away the secret. In what way would they have committed any deception? Because they thought about it? Maybe the thoughts really are the whole difference, but then we would at least in principle be able to observe some difference within the brain itself, given sufficiently advanced technology.
As for the lookup table man, I don't think I've conceded much. As you've described, the consciousness that large language models give rise to could have a wholly unexpected location, but as long as the machinery necessary to recreate the functions of the brain is there, many of the same conscious experiences that we have would also be created by the LLM (or the process of creating it).
I would agree that you're trying to justify a functionalist account, but your response to the lookup-table man makes me think you have either accidentally stumbled into behaviourism or are pretty close to it, because you appear to regard the lookup-table man as functionally identical to a brain. Your conception of functional identity effectively counts everything that would behave identically, regardless of its internal state or the causal mechanism employed, as functionally identical.

All functionalists I have read on the topic agree that something which behaves identically to me but uses a very different mechanism to arrive at its behavioural output could potentially not be conscious, and indeed this would be their response to the lookup-table man. Sure, they would also argue that in actuality the likelihood of him being produced with behaviours that imitate consciousness is very small unless something conscious was involved in his creation, the way conscious human text is involved in the creation of a large language model. But if he popped into existence as a Boltzmann brain, they would simply argue that he is not conscious, because his mind is functionally different even if it gives out the same behavioural output. By contrast, you argue that although this is physically possible, it's unlikely, so we should treat it as if it were physically impossible, or at least not a useful thought experiment.

The reason I argue that you've given the game away when it comes to large language models is that if the consciousness in their creation is actually the consciousness of the humans who wrote the text they are trained on, or of the people training them, then that is pretty much consistent with the worldview of people who don't consider them conscious. These people would only be incorrect if the consciousness were actually located somewhere other than in the humans.

To be clear, I don't mean that to be functionally identical something has to be physically identical. A simulation of my brain would have the same conscious experience because it has the same thought process, even if the material it's running on is different. But the lookup-table man absolutely does not share my thought process; he just has the same behavioural output.
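To make that last distinction concrete, here's a minimal toy sketch of my own (nothing from your argument, and obviously nothing close to a real brain or lookup table): two agents with identical input-output behaviour, only one of which produces its output through intermediate internal steps. A purely behavioural test cannot tell them apart; a criterion that looks at internal causal structure can.

```python
def simulated_brain(stimulus: str) -> str:
    # Toy stand-in for a process-based agent: the output is derived
    # through intermediate internal states ("thoughts").
    percept = f"perceived:{stimulus}"
    thought = f"considered:{percept}"
    return thought.replace("considered:perceived:", "response-to-")

# Toy stand-in for the lookup-table man: the same outputs, precomputed
# and stored, with no intermediate process at all.
LOOKUP_TABLE = {
    stimulus: f"response-to-{stimulus}"
    for stimulus in ["greeting", "question", "insult"]
}

def lookup_table_man(stimulus: str) -> str:
    return LOOKUP_TABLE[stimulus]

for s in ["greeting", "question", "insult"]:
    # Behaviourally indistinguishable...
    assert simulated_brain(s) == lookup_table_man(s)
# ...but only simulated_brain has intermediate states one could point to
# as "the thought process". That internal difference is exactly what a
# behavioural criterion ignores and a functionalist criterion cares about.
```

The functionalists I'm describing would say the first agent's counterpart (a genuine brain simulation) shares my thought process, while the second's (the lookup-table man) merely shares my behavioural output.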