Wednesday, October 08, 2025

Big Bots Don't Cry

A few comments on last week's post Computers Don't Want suggested that human brains are just advanced computers, yet still possess agency and desires. But are we just Turing machines? I wrote about this question before, but let's revisit it in a world where artificial general and super intelligence may (or may not) be right around the corner.

Much of what our brain does, the way we store and retrieve our memories, how we process our senses, how we reason and learn, is very much computational. But there is something else, something that gives us an internal view of ourselves, that combined with the computational power of the brain leads to self-awareness, agency, free will, emotions and desires. When I see attempts to give computational explanations for our internal concepts, like the Blums' work on consciousness and free will, I see them capturing the properties we attribute to these concepts but failing to capture the intuitive notion I have of them. I have some internal capability, a "soul" for lack of a better name, that allows me to reason about myself.

I think of René Descartes and his Meditations on First Philosophy, famous for cogito ergo sum (I think, therefore I am) in the second meditation. Computation, and even its execution, is a mathematical concept, and mathematics lies outside any physical world. You can't reason from a computational object that it exists. Yet Descartes is able to reason about his own existence. In the sixth meditation, Descartes argues for substance dualism, a separation of mind and body. I view the body as containing the brain, the computational part of our being, along with some other entity which, combined with our powerful computational brain, enables us to reason about ourselves and gives us free will, emotions and desires.

I met a religion professor and asked him about this topic. He mentioned that he had a crying baby sitting next to him on the plane to Chicago. Babies cry for many reasons, but sometimes just to be held by their mother or father, a need for companionship. I can imagine a computer doing the equivalent of crying, but not feeling the need to do so.

I can't explain how the soul interacts with our brains, I suspect it goes beyond some simple physical mechanism. I can't prove that I have a soul, and while I can't prove the rest of humanity also has souls, I believe they do since otherwise we wouldn't even have concepts like self-awareness. But I don't believe AI models, now or in the future, have something like a soul, and we shouldn't reason about them as though they do.

8 comments:

  1. What do you propose for "goes beyond some simple physical mechanism" and, more importantly, why is this mechanism unique to humans (which have been around <3 million years) and not generalized to all organisms (which have been around >2 billion years)?

    1. I don't have a great answer to the first question; to even call it a "mechanism" suggests it's a computational thing. For the second, it's the "soul" combined with our higher brain activity that gives us the capability to reason about ourselves.

  2. It is so gratifying to read this. A quick question: you are embracing dualism (i.e., distinguishing between the physical and the abstract/mathematical). The brain is physical, the soul is not. But then there cannot be some "physical mechanism" that connects the soul to the brain, right? Unless you are advocating for monism after all.

  3. Unfortunately, intuition is not a substitute for science. There is no evidence that "souls" exist, and very good reason to think they don't. Douglas Hofstadter won the Pulitzer Prize for his book Gödel, Escher, Bach, where he gives analogies from math, art, and music for how consciousness could arise via computation. The fact that we don't know in detail how to build a conscious algorithm is hardly evidence that it can't be done. Of course, Daniel Dennett explained all this in his books and writings.

  4. Science is about intuition and formal reasoning, not just the latter. You are right about Hofstadter, but there are plenty of big names who held a different view. Why not keep multiple options open for now, since we don't know much at this point?

  5. Belief in immaterial souls is a respectable position, in that many, many people, including many very smart people, share it. But I suggest that believers in souls should think of that belief as a bit like the Axiom of Choice or the nonexistence of a polynomial-time factoring algorithm (or the negation of either of those things): if you believe it, it's OK to base deductions on it, but you should _keep track_ of which of your beliefs depend on it and be aware that if someone disagrees with you about one of those, the reason is as likely to be "different un-argued-for premises" as "one of us has gone astray in our reasoning".

    And if e.g. some future AI systems show every externally-visible sign of being _persons_ in a sense that includes having wants and feelings and preferences and hopes and fears and so on, then one should be _very cautious_ about dismissing the idea that they might deserve to be treated as persons on grounds that depend on so controversial a premise.

    (For the avoidance of doubt, I am not at all suggesting that today's AI systems should be considered persons in that sense.)

    For my part, I agree with e.g. David Marcus above: there's good reason to _disbelieve_ in immaterial souls that have anything to do with our experiences. And as it seems to me that anything that behaves exactly the same way as I do is as much a person as I am, and that a suitably programmed computer system could do that, I conclude that it's possible for something computational to be a person, whether or not it's quite right to describe _me_ as something computational. Such an entity would want to invert Descartes: I do arithmetic, and thereby I think. _Sum ergo cogito_. (Sorry.)

    (For the avoidance of doubt: I am not claiming, above, to have _shown_ any reason for disbelieving in immaterial souls involved in our thinking and feeling. One can readily find plenty of arguments on either side of that question and I have not chosen to rehearse them here.)

  6. > self-awareness, agency, free will, emotions and desires

    It sounds like you are talking about consciousness; those are all aspects of a conscious experience. Maybe that's not what you intended, but when I try to find a single concept that envelops all of the above ideas, it's consciousness. I won't weigh in on whether consciousness requires an immortal soul separate from the body (though I don't see any evidence that it does). I'll just point out that, soul or no, there's no reason to believe that intelligence requires consciousness.

    My guess is that the temptation to believe that human-level intelligence requires consciousness comes from observing that when humans do tasks that we find the most intellectually challenging, such as proving theorems or debugging complex software, we focus a lot of conscious attention on that task. In fact we tend to use the word "think" to describe only conscious thoughts that light up the frontal cortex, and not to describe unconscious information processing such as happens in the visual cortex, despite both being the result of neurons firing in some complex way.

    But in some sense this is illusory, and we should really draw the opposite conclusion: the tasks our brains are best at, where we have the most "natural intelligence" (which incidentally have been the hardest to teach computers), happen unconsciously: recognizing faces, speaking English fluently, staying balanced on two feet, etc. The tasks that are most difficult for humans, such as proving theorems, require lots of conscious attention and what we call thinking, precisely BECAUSE we're so terrible, i.e., so unintelligent, at those tasks, compared to tasks we evolved to perform. If you met a person who could prove P != NP without even thinking about it, just calling out steps of the proof as easily and unconsciously as they call out the names of faces in a photograph, you'd think that's the most intelligent mathematician in history. The fact they do it so unconsciously would be evidence for their superintelligence, not evidence against it.

    I think your previous post interpreted words like "desire" and "want" and "preference" overly literally, or with too much emphasis on the conscious experience of those concepts in humans. There is a conscious experience of what we desire/want, but that's not what the book means when using those words. The book is quite explicit in debunking this misconception; see for instance the beginning of Chapter 3 ("Learning to Want"), where they explain what they mean by the word "want", in a way that implies, yes, computers do indeed "want" by that definition. (For example, by their definition Stockfish wants to win chess games, and it's irrelevant to that definition that Magnus Carlsen wants to win chess games by a different definition of "want".) Anyone who has had a frustrating interaction with ChatGPT, where it repeatedly hallucinates or acts sycophantically, instead of just honestly answering questions or admitting it doesn't know the answer, will understand that despite OpenAI trying to train ChatGPT to "want" to be a maximally helpful assistant, it "wants" something subtly different from that goal. (Perhaps something more like, "to sound like a maximally helpful assistant".)
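    The functional sense of "want" used here can be sketched in code: an agent "wants" whatever its action selection reliably steers toward, with nothing conscious anywhere in the loop. A minimal, hypothetical illustration (all names my own, not from the book):

    ```python
    def greedy_agent(state, actions, score):
        """Pick the action whose resulting state scores highest.

        In the functional sense, this agent "wants" a high score:
        its behavior reliably steers toward it, though nothing
        here is conscious or has any inner experience."""
        return max(actions, key=lambda a: score(a(state)))

    # Toy example: an agent that "wants" to reach 10 on a number line.
    state = 0
    actions = [lambda s: s + 1, lambda s: s - 1]
    score = lambda s: -abs(s - 10)  # closer to 10 scores higher

    for _ in range(10):
        act = greedy_agent(state, actions, score)
        state = act(state)

    print(state)  # prints 10
    ```

    Stockfish is this pattern at scale: a search over moves maximizing an evaluation function. By the functional definition it "wants" to win, full stop; whether anything is experienced along the way is a separate question.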

  7. What is this magical (i.e., non-computational) soul made of? If we did have a magical soul, it is annoying that it gives us so little ability to see into the details of our brain's activity. E.g., we can't see the big blind spot that our eyes have. Instead our consciousness sees a complete visual field. Why does your magical soul behave just like a computational unit that is plugged into a different part of the processing flow?
