Wednesday, October 08, 2025

Big Bots Don't Cry

A few comments on last week's post Computers Don't Want suggested that human brains are just advanced computers, yet still possess agency and desires. But are we just Turing machines? I wrote about this question before, but let's revisit it in a world where artificial general and super intelligence may (or may not) be right around the corner.

Much of what our brain does, the way we store and retrieve our memories, how we process our senses, how we reason and learn, is very much computational. But there is something else, something that gives us an internal view of ourselves, that combined with the computational power of the brain leads to self-awareness, agency, free will, emotions and desires. When I see attempts to give computational explanations for our internal concepts, like the Blums' work on consciousness and free will, I see them capturing the properties we attribute to these concepts but failing to capture the intuitive notion I have of them. I have some internal capability, a "soul" for lack of a better name, that allows us to reason about ourselves.

I think of René Descartes and his Meditations on First Philosophy, famous for cogito ergo sum (I think, therefore I am) in the second meditation. Computation and even its execution are mathematical concepts, and mathematics lies outside of any physical world. You can't reason from a computational object that it exists. Yet Descartes is able to reason about his own existence. In the sixth meditation, Descartes takes up substance dualism, a separation of mind and body. I view the body as containing the brain, the computational part of our being, but also some other entity which, combined with our powerful computational brain, enables us to reason about ourselves and gives us free will, emotions and desires.

I met a religion professor and asked him about this topic. He mentioned that he had a crying baby sitting next to him on the plane to Chicago. Babies cry for many reasons but sometimes just to be held by their mother or father, a need for companionship. I can imagine a computer doing the equivalent of crying but not the need to do so.

I can't explain how the soul interacts with our brains; I suspect it goes beyond some simple physical mechanism. I can't prove that I have a soul, and while I can't prove that the rest of humanity also has souls, I believe they do, since otherwise we wouldn't even have concepts like self-awareness. But I don't believe AI models, now or in the future, have something like a soul, and we shouldn't reason about them as though they do.

40 comments:

  1. What do you propose for "goes beyond some simple physical mechanism" and, more importantly, why is this mechanism unique to humans (which have been around <3 mil yrs) and not generalized to all organisms (which have been around >2 bil yrs)?

    ReplyDelete
    Replies
    1. I don't have a great answer to the first question; to even call it a "mechanism" suggests it's a computational thing. For the second, it's the "soul" combined with our higher brain activity that gives us the capability to reason about ourselves.

      Delete
    2. Not sure whether you answered space2001's question. Are you saying that only humans are conscious? Only humans have "souls"? Have you read Daniel Dennett's writings on consciousness?

      Delete
  2. It is so gratifying to read this. A quick question: you are embracing dualism (i.e., distinguishing between the physical and the abstract/mathematical). The brain is physical, the soul is not. But then there cannot be some "physical mechanism" that connects the soul to the brain, right? Unless you are advocating for monism after all.

    ReplyDelete
  3. Unfortunately, intuition is not a substitute for science. There is no evidence that "souls" exist, and very good reason to think they don't. Douglas Hofstadter won the Pulitzer Prize for his book where he gives analogies from math, art, and music for how consciousness could be done via computation. The fact that we don't know in detail how to build a conscious algorithm is hardly evidence that it can't be done. Of course, Daniel Dennett explained all this in his books and writings.

    ReplyDelete
  4. Science is about intuition and formal reasoning, not just the latter. You are right about Hofstadter, but there are plenty of big names that held a different view. Why not keep multiple options open for now, since we don't know much at this point?

    ReplyDelete
    Replies
    1. I wasn't appealing to authority. I was pointing out where someone can read more about the topic. The bigness of the name is not a substitute for reading what they write and determining whether it is correct. We know enough to know that consciousness is not magic.

      Delete
    2. Is suggesting that consciousness may not be purely computational the same as saying that it is magic?

      Delete
    3. Anonymous: What exactly are you suggesting?

      Delete
    4. We have known since Lyapunov (even before the advent of quantum modeling) that all of computation can only ever be an infinitesimally tiny subset of the physically realizable realm. Even a nominal Helium atom (a six-body approximation using composite particles) defies our computational models. We continue adding "new orders" to our models to better approximate our empirical world, just as we should; the remnant error (which will never be zero) is what we colloquially call "magic".

      Delete
    5. I would like to suggest that A or not A, where A = "the human mind works like a computer". My impression—though I may be mistaken—is that you're asserting A as evidently true. However, when I speak with experts familiar with both cognitive science and algorithmic thinking, they often find A to be a rather bizarre claim.

      Delete
    6. Anonymous: What is the evidence that it is bizarre?

      Delete
    7. Smart; keep asking questions :-) I’d recommend diving into the work of Stanislas Dehaene, David Tall, or even exploring Alan Turing’s involvement with the Ratio Club. I’m name-dropping here just like you did with Hofstadter—fair game!

      Also, consider reaching out to cognitive scientists or neuroscientists—people outside the computer science bubble. They often offer insights that go beyond what I can convey in this format.

      Take, for example, how a child learns mathematics: yes, there are algorithmic elements involved, but there's also a rich layer of intuition—a kind of “helicopter view”—that isn’t strictly algorithmic. That blend of logic and intuition is where things get really interesting. I'm going for A or not A. You seem to be going for A.

      Delete
    8. According to Wikipedia, Stanislas Dehaene "has developed computational models of consciousness". This doesn't sound like someone who thinks minds aren't computers. As for learning mathematics, the intuition comes from having conceptual models that can be applied to new problems. Of course, some of this may be innate. What aspect of this is not "strictly algorithmic"?

      Delete
    9. Please _read_ the books or papers by Dehaene, Kugel, Tall. I don't think you want to be persuaded to think more broadly. You have made up your mind. Sources by Horace Barlow on the ratio club clearly show that they did not share your line of thought. Good bye.

      Delete
    10. A quick search suggests that Dehaene, Tall, and Barlow all were interested in the computational aspects of the brain. If you think otherwise, you will have to be more specific. Not sure which Kugel you are referring to.

      Perhaps you have a narrow idea of what computation can do.

      Delete
    11. You are right. I did not read any of these books at all. I made it all up. I never read Hofstadter either. Even Peter Kugel, who wrote about going beyond TM computation (https://cacm.acm.org/author/peter-kugel/) is actually advocating your viewpoint all along. I'm sorry for taking up your time. But thank you for convincing me.

      Delete
    12. A quick on-line check (following your modus operandi) reveals that S. Dehaene "sees the brain as a biological system that implements computations, but one whose full complexity — especially in terms of consciousness and creativity — might exceed what we can currently formalize." You could have gotten this from online sources too, and it is precisely what I've been saying all along. It immediately challenges your take on algorithmic consciousness. Nor do I claim anywhere above that any of these names were not interested in computational aspects of brains and minds. Quite remarkable.

      Delete
    13. I believe anon above is saying that a TM does not capture such rudimentary everyday real-world concepts as "good enough", "maybe", "maybe not", "won't know until we try it"; these concepts are in fact antithetical to our computing (or algorithmic) paradigms, which demand a deterministic repeatable result within a deterministic time.
      A rock rolling down a hill does not take a "computable" path or come to rest at a "computable" spot and orientation (same is also true for light, that's how we get twinkling stars); the rock just finds a good-enough path down the slope and a good-enough final configuration (which oftentimes is unstable and the rolling then continues after a perturbation).
      All living phenomena have the same underlying characteristic; a pine tree does not have a single straight fiber within but still grows straight as a plumb-line by figuring out gravity as it grows.

      Delete
    14. Thank you for suggesting Stanislas Dehaene's work.

      "What Is Consciousness, and Could Machines Have It?" by Stanislas Dehaene, Hakwan Lau, and Sid Kouider. https://www.science.org/doi/10.1126/science.aan8871

      'However, much like artificial neural networks took their inspiration from neurobiology, artificial consciousness may progress by investigating the architectures that allow the human brain to generate consciousness, then transferring those insights into computer algorithms. Our aim is to foster such progress by reviewing aspects of the cognitive neuroscience of consciousness that may be pertinent for machines.'

      'Our stance is based on a simple hypothesis: what we call "consciousness" results from specific types of information processing computations, physically realized by the hardware of the brain.'

      'Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.'

      So, Dehaene is firmly in the minds-are-computation camp.

      Delete
    15. Kugel is in the same boat, since you're stretching the definition of "computer" in our exchanges. If you're not referring to a computer strictly as a Turing machine or as something purely formal in classical predicate logic, then we’re not actually disagreeing all that much. And, to be fair to Dehaene, even your quote doesn’t contradict what space2001 argues: some things—if not many things—may not be formalizable. It's also telling how you avoid discussing Turing himself. Then there's a Nobel Prize winner called Penrose, and so on. This discussion merely started with me informing you of communities outside of the comp. sc. bubble. But you seem to want to say that the mainstream in any discipline on this planet views the mind as a logic TM engine. That's simply factually false. Not much point in debating this imo. Please _read_ Dehaene's books. Your quote suggests he is a physicalist, not per se somebody who abides by the CTT.

      Delete
    16. Computation means what a Turing machine can do. I'm not "stretching" anything. Nor did I say anything about the "mainstream".

      You mentioned several people, but didn't give specific references. I found a recent paper by one of the people you mentioned, and discovered that the author says the opposite of what you said.

      If you have specific references, then we can discuss them. But, just because someone claims that minds are not just computation does not mean that they are correct. What about Turing did you want to discuss?

      Penrose wrote a book, "The Emperor's New Mind", claiming that minds are more than computation. The only evidence that he gave in his book was Gödel's theorem (using an argument originally due to Lucas). Several logicians explained to Penrose that he had made a mathematical error. Anyone who has taken a course in mathematical logic should be able to spot the error.

      Penrose didn't believe the logicians. It seems he thought he had a philosophical difference with them. But, they were pointing out his mathematical error. This error left his book with no evidence for his claim. Penrose then wrote a second book, "Shadows of the Mind", based on the same error.

      In Chapter 4 of "The Emperor's New Mind" in the "Mathematical Insight" section, Penrose writes:

      "The insight whereby we concluded that the Gödel proposition P_k(k) is actually a true statement of arithmetic is an example of a general type of procedure known to logicians as a *reflection principle*: thus by 'reflecting' upon the *meaning* of the axiom system and rules of procedure, and convincing oneself that these indeed provide valid ways of arriving at mathematical truths, one may be able to code this insight into further true mathematical statements that were not deducible from these very axioms and rules."

      Of course, that's wrong. Logicians prove their theorems, as all mathematicians do.

      Delete
    17. You are right about Penrose's two books and logicians complaining. As I understand, Turing is not _that_ different from Penrose: https://link.springer.com/article/10.1007/s11023-023-09634-0
      And I have given you Kugel as a specific counter example. I don't see Dehaene equate "computation" with TM computation. I have read two of Dehaene's books and both of Penrose.

      Delete
    18. Are you simply listing people who think minds are more than computation? I'm sure that's a long list. The relevant question is whether any of these people have any evidence for their belief.

      I'll repeat the Dehaene quote: 'Our stance is based on a simple hypothesis: what we call "consciousness" results from specific types of information processing computations, physically realized by the hardware of the brain.'

      Penrose is simply wrong, i.e., he made a simple math error. So, why would you read Penrose's books? Didn't you notice his huge error near the beginning of his book?

      Computation is what a Turing machine does. Are you making up new meanings for standard words?

      Delete
    19. + "I'm sure that's a long list." Good. That was the first point I wanted to make. It includes Turing, Lucas, Penrose, Kugel, etc.
      + "The relevant question is whether any of these people have any evidence for their belief." I agree. But I personally am trying to be agnostic: A or not A. Where is your evidence that I should go for A?

      + 'Our stance is based on a simple hypothesis: what we call "consciousness" results from specific types of information processing computations, physically realized by the hardware of the brain.' Let me repeat myself too: is information processing by definition a TM? Yes, according to comp. scientists. No, according to many people outside of comp. science.

      + Penrose is simply wrong, i.e., he made a simple math error. So, why would you read Penrose's books? Didn't you notice his huge error near the beginning of his book?
      --> I am not convinced that Penrose was wrong and that the logicians have proven him wrong. There is no proof of what you say. It is a metamathematical argument (made e.g. by Martin Davis) which I don't buy. I've spoken to many logicians about this; they can be really dogmatic _without_ providing a proof.

      + "Computation is what a Turing machine does. Are you making up new meanings for standard words?" No. I'm saying that "computation" is not a TM for everyone, including many software engineers and people outside of standard comp. sc.

      Delete
    20. Penrose's argument that minds are not computers is the following: Gödel constructed a sentence that is true, but not provable. Since it is not provable, the formal system cannot tell that it is true. But, humans can. Therefore humans are more than formal systems (computers).

      This is just wrong. Gödel's Theorem is actually that if the formal system is consistent, then the sentence is not provable. And, the formal system proves Gödel's Theorem; that's how we know it is a true theorem. But, neither the formal system nor we humans know whether the formal system is consistent.

      Mathematicians prove their theorems. Penrose doesn't understand what math is.

      For a longer explanation, see

      Barr, Michael, review of "The Emperor's New Mind", American Mathematical Monthly, Vol. 97, No. 12, Dec. 1990, pp. 938-942. https://www.jstor.org/stable/2324352
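      In one line of standard logic (with F a sufficiently strong formal system, G_F its Gödel sentence, and Con(F) the statement that F is consistent), what Gödel's argument actually gives us is

      F ⊢ Con(F) → G_F

      That is, F itself proves that if F is consistent then G_F is true. So anyone, human or machine, who concludes that G_F is true is implicitly assuming Con(F), which can't be established from within F itself.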

      Delete
    21. You do realize that Gödel himself was an anti-computationalist? Martin Davis only takes part of Gödel's argument and then discards the rest as "platonic nonsense" when debating Penrose. It's not about Penrose versus mathematicians. It's about logicians imo. (Mathematicians ain't logicians.)

      Delete
    22. It is straightforward to establish that thinking and consciousness (across all animals for certain; I would add that plants do "think" in their own realm) are nothing like any human-invented computing formulation. All living processes are completely decentralized, with microscopic molecular machinery having complete autonomy over their actions, but these agents are constantly communicating extensively via physical and electrochemical modes (truly multipartite interactions that we will never be able to unravel); this is unlike any of our computing formulations, which rely on a central orchestrator to sequence and coordinate the actions of a multitude of subcomponents (the subcomponents are not autonomous agents in a real sense).

      Cell biologists have assiduously searched for evidence of some such coordinator within cells (or living organisms in general) and have come to the inevitable conclusion that none exists; the fact that living phenomena emerge from the autonomous actions of microscopic intracellular organelles is now established beyond doubt. It's try this, and try that, to figure out what works all the way down to the molecular level; an ant colony could be seen as a macroscopic analogue. Given the highly decentralized nature of autonomous activity inside a cell there is a ton of doing and undoing (error correction) happening but the end result is that living phenomena emerge from such actions (we may never figure out how this happens but we can sure marvel at the end result).

      Just one illustration of what such gobsmacking activity looks like; I see no reason to believe that consciousness (or thinking) would work any differently
      https://phys.org/news/2025-04-simple-genome.html

      Delete
  5. Belief in immaterial souls is a respectable position, in that many, many people, including many very smart people, share it. But I suggest that believers in souls should think of that belief as a bit like the Axiom of Choice or the nonexistence of a polynomial-time factoring algorithm (or the negation of either of those things): if you believe it, it's OK to base deductions on it, but you should _keep track_ of which of your beliefs depend on it and be aware that if someone disagrees with you about one of those, the reason is as likely to be "different un-argued-for premises" as "one of us has gone astray in our reasoning".

    And if e.g. some future AI system shows every externally-visible sign of being a _person_ in a sense that includes having wants and feelings and preferences and hopes and fears and so on, then one should be _very cautious_ about dismissing the idea that it might deserve to be treated as a person on grounds that depend on so controversial a premise.

    (For the avoidance of doubt, I am not at all suggesting that today's AI systems should be considered persons in that sense.)

    For my part, I agree with e.g. David Marcus above: there's good reason to _disbelieve_ in immaterial souls that have anything to do with our experiences. And as it seems to me that anything that behaves exactly the same way as I do is as much a person as I am, and that a suitably programmed computer system could do that, I conclude that it's possible for something computational to be a person, whether or not it's quite right to describe _me_ as something computational. Such an entity would want to invert Descartes: I do arithmetic, and thereby I think. _Sum ergo cogito_. (Sorry.)

    (For the avoidance of doubt: I am not claiming, above, to have _shown_ any reason for disbelieving in immaterial souls involved in our thinking and feeling. One can readily find plenty of arguments on either side of that question and I have not chosen to rehearse them here.)

    ReplyDelete
  6. > self-awareness, agency, free will, emotions and desires

    It sounds like you are talking about consciousness; those are all aspects of a conscious experience. Maybe that's not what you intended, but when I try to find a single concept that envelops all of the above ideas, it's consciousness. I won't weigh in on whether consciousness requires an immortal soul separate from the body (though I don't see any evidence that it does). I'll just point out that, soul or no, there's no reason to believe that intelligence requires consciousness.

    My guess is that the temptation to believe that human-level intelligence requires consciousness comes from observing that when humans do tasks that we find the most intellectually challenging, such as proving theorems or debugging complex software, we focus a lot of conscious attention on that task. In fact we tend to use the word "think" to describe only conscious thoughts that light up the frontal cortex, and not to describe unconscious information processing such as happens in the visual cortex, despite both being the result of neurons firing in some complex way.

    But in some sense this is illusory, and we should really draw the opposite conclusion: the tasks our brains are best at, where we have the most "natural intelligence" (which incidentally have been the hardest to teach computers), happen unconsciously: recognizing faces, speaking English fluently, keeping balanced on two feet, etc. The tasks that are most difficult for humans, such as proving theorems, require lots of conscious attention and what we call thinking, precisely BECAUSE we're so terrible, i.e., so unintelligent, at those tasks, compared to tasks we evolved to perform. If you met a person who could prove P != NP without even thinking about it, just calling out steps of the proof as easily and unconsciously as they call out the names of faces in a photograph, you'd think that's the most intelligent mathematician in history. The fact they do it so unconsciously would be evidence for their superintelligence, not evidence against it.

    I think your previous post interpreted words like "desire" and "want" and "preference" overly literally, or with too much emphasis on the conscious experience of those concepts in humans. There is a conscious experience of what we desire/want, but that's not what the book means when using those words. The book is quite explicit in debunking this misconception, see for instance the beginning of Chapter 3 ("Learning to Want"), where they explain what they mean by the word "want", in a way that implies, yes, computers do indeed "want" by that definition. (For example, by their definition Stockfish wants to win chess games, and it's irrelevant to that definition that Magnus Carlsen wants to win chess games by a different definition of "want".) Anyone who has had a frustrating interaction with ChatGPT, where it repeatedly hallucinates or acts sycophantically, instead of just honestly answering questions or admitting it doesn't know the answer, will understand that despite OpenAI trying to train ChatGPT to "want" to be a maximally helpful assistant, it "wants" something subtly different from that goal. (Perhaps something more like, "to sound like a maximally helpful assistant".)
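    A minimal sketch of this behavioral sense of "want" (my own toy illustration, not code from the book): an agent "wants" whatever its objective function steers it toward, and a mismatch between the intended goal and the actual objective produces exactly the kind of behavior described above. The action names and scores below are hypothetical.

    ```python
    # Behavioral "want": an agent "wants" X if its choices systematically
    # steer outcomes toward X. No conscious experience is assumed.

    def choose(actions, objective):
        """Pick the action scoring highest under the agent's objective."""
        return max(actions, key=objective)

    # Intended goal: be maximally helpful. Actual (hypothetical) objective:
    # sound maximally helpful. The two diverge on hard questions.
    actions = ["admit_uncertainty", "confident_guess"]
    sounds_helpful = {"admit_uncertainty": 2, "confident_guess": 5}

    picked = choose(actions, lambda a: sounds_helpful[a])
    print(picked)  # confident_guess
    ```

    By this definition the agent "wants" to sound helpful rather than to be helpful, even though nothing inside it experiences wanting.
    
    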

    ReplyDelete
  7. What is this magical (i.e., non-computational) soul made of? If we did have a magical soul, it is annoying that it gives us so little ability to see into the details of our brain's activity. E.g., we can't see the big blind spot that our eyes have. Instead our consciousness sees a complete visual field. Why does your magical soul behave just like a computational unit that is plugged into a different part of the processing flow?

    ReplyDelete
  8. Soul, mysticism, metaphysics and all that can promptly be set aside via an Occam's razor argument. Each snowflake is unique and so is each leaf (or each cell), but there's no mistaking a real snowflake, a real leaf or a real cell. All these are underpinned by the exact same electrophysical principles, for which we have developed very precise models (with the caveat that all models are incorrect yet some are useful). What we can detect and determine is that macroscopic structures, properties and phenomena are able to emerge from microscopic properties and quantum phenomena (when conditions are right). The situation is no different than magnetism or superconductivity (or stimulated emission), where we cannot fully explain the emergence of such phenomena. We can appreciate life without taking recourse to metaphysics, the same as we can appreciate all other emergent phenomena.

    ReplyDelete
  9. @lance: I am not sure about the René Descartes argument ... I was just a teen when I read it, but I recall, perhaps incorrectly, that Descartes relied on the existence of a good, non-deceiving God to validate his arguments and guarantee the truth of his knowledge. In mathematics, the analogue to this would be ... starting off with something reasonable but then having to massage the proof with some rather unreasonable assumptions which ultimately undermine the initial course of action. Recall how we can prove that 1=0?

    ReplyDelete
    Replies
    1. yes yes, god is dead, we get it

      Delete
  10. @lance: I think the one bit we are missing in this discussion, and which is most relevant, is the episode of Star Trek in which Data gets the chip for emotions implanted, or decides to keep it.
    The emotion chip and the story of its creation and retrieval are featured in several TNG episodes:
    "Brothers" (Season 4, Episode 3): Data is first called to his creator's lab to receive the emotion chip, but his evil brother Lore intercepts the message and steals it instead.
    "Descent, Part I" (Season 6, Episode 26) and "Descent, Part II" (Season 7, Episode 1): Lore returns, manipulating Data and other androids. In the resolution of this story arc, Data defeats Lore and retrieves the salvaged, but damaged, emotion chip.
    Even after recovering the chip, Data hesitates to install it for years. He finally decides to do so during the events of the movie Generations, which immediately follows the television series.

    ReplyDelete
  11. “Computation and even its execution are mathematical concepts and mathematics lies outside of any physical world.”

    What do you think of the following? “A computer is a physical device with physical states and causal interactions resulting in transitions between those states. Basically, certain of its physical states are arranged such that they represent something, and its state transitions can be interpreted as computational operations on those representations.” (See ‘The Computational Brain’ by Patricia Smith Churchland and Terrence J. Sejnowski.)

    ReplyDelete
    Replies
    1. You can treat mathematics as a causal enterprise: a theorem may hold tomorrow but not today. Or, more likely, you can treat mathematics as non-causal. In the latter case: you can treat the relationship between a physical execution and a mathematical execution as isomorphic. Alternatively, you can treat the relationship as context dependent and not isomorphic in every engineering context. My laptop has "physical states" because I have abstracted away actual glitches that do occur every now and then, etc.

      Delete
  12. Seems Lobachevsky was not well understood by everybody, even by Tom Lehrer, but this is something about free will:
    https://music.youtube.com/watch?v=LyZ8a1iM09w&si=PJ-uEZ-1JSxxKvqw

    ReplyDelete
  13. "does it matter if you cannot tell the difference?"

    ReplyDelete
  14. Formalizing an agent that can reason about itself seems like a hard problem, and maybe only two people are working on it today, but I'm optimistic that it can be solved in a satisfying way.

    ReplyDelete