Discussion about this post

Ali Afroz

Regarding your response to the possibility of someone having their brain replaced with a lookup table, keep in mind that your response basically amounts to behaviourism, since you appear to be arguing that regardless of how the replacement functions, as long as it takes in the same electrical inputs and gives out the same electrical outputs, you should be considered conscious, which sounds very implausible. Imagine I am replaced by an artificial brain that is trying to pretend to be me. The brain consciously thinks of itself as trying to deceive others, but it is programmed to send out the same electrical impulses so as not to give the game away through facial expressions and the like. There seems nothing ridiculous about this hypothetical, although I expect you could argue that it would be practically difficult to achieve; the point is that its experiences are quite different from mine, so it doesn't seem a safe assumption that just because it gives you the same electrical output it has the same conscious experience.

In general, behaviourism strikes almost all people as very unlikely to be true, and yet that is what your argument appears to effectively amount to, considering you concede that replacing the brain with a different device, which functions quite differently but sends out the same electrical impulses, would supposedly lead to the same conscious experiences.

Also, even your response to the hypothetical replacement with lookup tables doesn't seem consistent with the rest of your argument, because if the real consciousness is located in the person designing it, or whatever, it's still obvious that the replacement does qualitatively change the conscious experience, since the person designing the device is obviously not having the same conscious experience that the brain would have been having if it had not been replaced. Basically, your response appears to concede that the qualitative experience would change if your brain were replaced with lookup tables, even if you argue that it still requires consciousness at some step. This can pretty easily be extended to large language models: perhaps the consciousness is in the people training these models but not in the models themselves. I don't actually think this is correct, and I'm far from certain about AI consciousness, but your argument doesn't appear to establish the conclusion you're trying to establish.
