A CS vision professor once told me, "Of course we know there is an efficient algorithm for that: humans can do it." Are we just nothing more than Turing machines running simple algorithms, using machine learning techniques that have been hard-wired into our brains through evolution? How sad.
But it's not true. I think, therefore I am. I have self-awareness. A Turing machine can't be self-aware. There are people who try to formalize self-awareness and then show those formulations can be realized on Turing machines. But these don't match my intuitive notion of self-awareness, so self-awareness cannot be formalized. There is something beyond computation that allows me to be self-aware, something called the soul.
I suspect you readers all have souls too but I can't prove it.
Does the soul violate the Church-Turing thesis? Does it allow us to compute things beyond the power of a Turing machine? Does it allow us to solve problems much faster than a soulless computer could ever do?
I think not. The soul is just another input to the Turing machine we call our brain. How the information gets from the soul to the brain is a process we may never fully understand, because reasoning about the soul requires the very soul we are trying to reason about.
A new approach to the Soul Conjecture!
i am trying really hard to find the april fools thing in the post :(
Given your current quantum state, all of your body can be simulated with polynomial overhead by a Quantum Turing Machine, according to the Extended Church-Turing hypothesis. Thus the soul assumption is not necessary and must be removed according to Occam's razor. In the relativized world, on the other hand, it would be quite interesting to see a HUMAN != BQP separation relative to the soul-oracle.
why do you assert that a Turing Machine cannot be self-aware?
No. No, no, no. I do not have a soul. I am not self-aware. I do not have qualia, I do not have conscious experience, and I am not sentient.
And I dare you zimboes to prove otherwise.
Mysticism is the right place to look: "Nothing is left out. Nothing. That's the love that easily combines bliss, vitality, and action at the same time. That's compassion, that's emptiness in form. That's you." ~ Adyashanti
The subtle fallacy of "I think therefore I am" is the following: the premise ("I think") already assumes the existence of an agent ("I"); hence the conclusion ("I am") is obvious.
But factually there is no "I." The factual premise is "there is thought" and this makes the conclusion "therefore I am" absurd.
Are we just nothing more than Turing machines running simple algorithms, using machine learning techniques that have been hard-wired into our brains through evolution? How sad.
It's interesting how that idea can evoke such divergent emotions. I think it's incredibly elegant that this sensation of self is based on the same forces that keep a rock from exploding or imploding, that the brain is just a piece of the universal wave function. I'm also endlessly intrigued by the beautiful complexity the Mandelbrot set exhibits because of (not in spite of) its being an incredibly simple equation.
But these don't match my intuitive notion of self-awareness so self-awareness cannot be formalized.
Far be it from me to discourage personal beliefs on this topic. If you find the idea of a soul attractive, by all means believe in it. But realize that your statement is abductive logic. That an explanation is wrong, or that you cannot think of an explanation, certainly doesn't imply one doesn't exist. Nothing strictly rules out a soul (and nothing can), but no current logic or science requires it either.
"But these don't match my intuitive notion of self-awareness so self-awareness cannot be formalized."
Beg the question much?
Ok, for all of the people responding above who admit to _not_ having a soul, I think this means that it is morally ok for me to do anything I want to you, just as it is morally ok for me to turn off my computer at the end of the day. Some of us do have souls, though.
I would normally assume that this is just an April Fool's Day post, but 1) it is not funny, and 2) Lance has in the past occasionally made passing remarks expressing a similar sentiment.
Perhaps Lance is having a bit of a mid-life crisis and sensing his mortality?
The post was about COMPLEXITY, but you are trying to explain it all by an incredibly SIMPLE set of categories: "April Fools," "mid-life crisis," etc. How human.
"Given your current quantum state, all of your body can be simulated with polynomial overhead by a Quantum Turing Machine according to the Extended Church-Turing hypothesis. Thus the soul assumption is not necessary and must be removed according to Occam's razor."
I'm sorry, how can you ever use Occam's Razor to justify the use of a theoretical "Quantum Turing Machine"? Methinks you don't know what Occam's Razor is.
This is April Fools, right?
On the other hand, if we make the assumption that human mind really is an algorithm, we can define a "millisoul" as a unit describing how human-like a particular algorithm behaves.
For example --
Roomba: 1 millisoul
Google Translate: 10 millisouls
Cleverbot: 100 millisouls
Algorithm that passes a Turing test: 1000 millisouls
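For what it's worth, the commenter's scale can be sketched as a lookup table; the names and ratings below are of course entirely the commenter's invention, not a real metric:

```python
# A whimsical "millisoul" scale, following the commenter's (entirely
# invented) ratings of how human-like an algorithm behaves.
MILLISOULS = {
    "Roomba": 1,
    "Google Translate": 10,
    "Cleverbot": 100,
    "Turing-test passer": 1000,
}

def souls(name):
    """Return the rating in whole souls (1 soul = 1000 millisouls)."""
    return MILLISOULS[name] / 1000
```

On this scale a Roomba rates a mere 0.001 souls, while an algorithm that passes a Turing test rates a full soul.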
P.S.: Matt - I agree that people who need a belief in souls to understand the difference between killing a person and turning off a computer should just continue to believe in souls.
you are all illusions created to occupy my mind
keep up the good work
I think it's unlikely that Turing machines alone have the capacity for self-awareness, but I DO think that an appropriately-connected, appropriately-coded network of Turing machines could be self-aware.
@Jeffe - I don't have to prove anything to YOU, I only have to assure MYSELF that you have those properties. And I'm convinced by your lecture notes.
Your wifey, OLGA, was recently reminding me that you should buy her the red dress with the blue blouse for your anniversary.
ok. message over.
I love the TCS community. However, this is a pretty shallow discussion about souls, whether they exist, and what this means for complexity. (There are a few exceptions, feel free to know that you are one of them). I'll bet a fifth grade classroom could do at least as well as us. I do like the jokes though :) Please show me WRONG (or ask questions or "listen").
isn't there a proof showing that there isn't anything that multiple TMs can do that a single TM can't do?
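Yes: a single machine can simulate any finite collection of machines by interleaving (dovetailing) their steps, so several machines compute nothing one machine cannot. A toy sketch of the interleaving idea, with Python generators standing in for machines (the labels and step counts are invented for illustration):

```python
# Each "machine" is a generator that yields one trace entry per step.
def machine(label, steps):
    for i in range(steps):
        yield f"{label}:{i}"

def dovetail(machines):
    """Run a list of machines round-robin until all of them halt."""
    trace = []
    while machines:
        still_running = []
        for m in machines:
            try:
                trace.append(next(m))   # advance this machine one step
                still_running.append(m)
            except StopIteration:       # this machine has halted
                pass
        machines = still_running
    return trace

# dovetail([machine("A", 2), machine("B", 1)]) -> ["A:0", "B:0", "A:1"]
```

The same idea extends to countably many machines, provided new machines are admitted into the rotation one at a time.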
One interesting way to parse Lance's post is that it reminds us that the "lens of computation" is bidirectional (as are all lenses, both real-world and metaphorical).
In one direction the computational lens transmits information about the element(s) that we are computationally examining, and in the other direction, the lens transmits information about our own human nature and capabilities.
The physical, dynamical embodiment of this same principle is cavity quantum electrodynamics (QED), which treats photon transmitters and receivers as part of the same unitary process, in which the dynamics of each completely determines the dynamics of the other.
Soberingly (but excitingly too) the dynamics of cavity QED is many orders of magnitude more complex and subtle than the simplified dynamics of conventional QED that we learn as undergraduates from (say) The Feynman Lectures. Moreover, to make progress in quantum computing/linear optics, these bidirectional subtleties have to be thoroughly understood mathematically, physically, and in terms of practical hardware too. Feynman himself struggled with these issues (sometimes getting the physics wrong), and even today, progress toward two-way dynamical understanding is slow and arduous.
Similarly, to make substantial progress in AI, it seems to me that we are going to have to thoroughly grasp the bidirectional complexity of the "lens of computation" ... mathematically, physically, and in practical human terms too.
This struggle for two-way understanding of humanity's computational lens won't be easy ... but surely it will be fun ... and it will have great practical importance too.
That is *one* reading of Lance's fine post "The Complexity of the Soul." For which, thanks and appreciation are extended to Lance and GASARCH, as usual! :)
Hmmmm ... as a further meditation, let's consider how Lance's challenging topic "The Complexity of the Soul" might be addressed from a point of view that is rigorously formal. For systems engineers, an often-useful, pragmatic-yet-formal way to organize one's ideas is as a haeros:
Definition: a haeros is a set of theorems, proof technologies, and algorithms that, in aggregate, provides mathematically natural foundations for a distinctive class of technologies, enterprises, and disciplines (from the Greek αἱρέω, meaning "to grasp with the mind, understand," also "take to oneself, choose").
Here the point is that the elements of a haeros can be (1) verified formally and (2) verified in practice; these two attributes distinguish a haeros from a philosophical argument.
For fun, over on Dick Lipton's blog, I have read-out the computational haeros that is instantiated in two recent AMS articles: Peter Sarnak’s Recent progress on the quantum unique ergodicity conjecture and Misha Gromov’s Crystals, proteins, stability and isoperimetry.
Broadly speaking, it turns out that the haeros of the Sarnak/Gromov articles has two components: a "reducing" computational lens and an "enlarging" computational lens, that respectively reduce and enlarge the state-space of dynamical computations (both classical and quantum).
Metaphorically speaking, nowadays we appreciate that telescopes and microscopes are free of chromatic aberration only when reducing and enlarging lenses are combined -- a point that once was very difficult to grasp for even so great a scientist as Newton.
The same is broadly true of systems engineering, which (in practice) requires a formally verifiable haeros that includes both enlarging and reducing elements ... it is great fun to read Sarnak’s and Gromov's articles from this point-of-view.
In effect, Lance's posts similarly asks us to envision a formal, verifiable haeros for cognitive science in the 21st century.
So we are led to ask a strictly formal question: "What CS theorems already exist, or might we envision, that might provide the 'enlarging' and 'reducing' computational lenses of a cognitive-science haeros?"
This post will not try to answer this (very tough) question ... the point rather is that Lance's provocative question "What is the complexity of the soul?" may perhaps eventually be answered in Leibniz' style: by (slowly) evolving a working haeros whose reducing and enlarging elements are verifiable both formally and in practice.
As for those CS/CT theorems that might provide working foundations for a practical cognitive-science haeros ... well ... IMHO a great many of the needed CS/CT theorems exist already, and more are being discovered at a rapid pace.
Identifying these theorems, and organizing them into a working haeros, is (obviously) a mighty tough, mighty fun challenge for the coming generation of CS/CT theorists. Good! :)
Hopefully, the above addresses anonymous' challenge to "do better than a fifth grade classroom" ... the above post is written to be suitable as a reading assignment for a ninth grade classroom! :)
Self-awareness, in a sense, is meta-programming, where you can control the structures by which you control the building of the computational paths. Most modern programming languages do not support these possibilities; a few do, but there is no infrastructure to exploit them. The soul and intuition have a similar feeling; also, today nobody needs either one if it cannot be translated into action. Sad but true.
Mikle, modern autonomous vehicles *do* exhibit a specialized form of dynamical self-awareness ... in essence because optimal dynamical controllers (for which there is a well-developed theory) generically include an optimal estimator of the dynamical state.
In effect, autonomous vehicles continually update their own -- necessarily coarse-grained and/or reduced-dimension -- internal models of themselves.
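To make the estimator idea concrete, here is a minimal sketch of a scalar (one-dimensional) Kalman-style state estimator; the noise parameters and measurements below are invented for illustration, not taken from any real vehicle:

```python
# A minimal one-dimensional state estimator (a scalar Kalman filter),
# illustrating the "internal self-model" idea: the system keeps a
# coarse estimate of its own state and updates it from noisy readings.
def kalman_step(x_est, p_est, measurement, q=0.01, r=1.0):
    # Predict: the model says the state is unchanged; uncertainty grows
    # by the (assumed) process-noise variance q.
    p_pred = p_est + q
    # Update: blend prediction and measurement, weighted by their
    # respective uncertainties (r is the measurement-noise variance).
    gain = p_pred / (p_pred + r)
    x_new = x_est + gain * (measurement - x_est)
    p_new = (1 - gain) * p_pred
    return x_new, p_new

# Start ignorant (estimate 0.0, high uncertainty) and feed in noisy
# measurements of a true value near 1.0.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
# The estimate x moves toward 1.0 and the uncertainty p shrinks.
```

The full vehicle version is of course multidimensional and coupled to a dynamical model, but the core loop is this same predict-then-update cycle.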
The resulting autonomous dynamical behaviors can be startlingly suggestive of cognition, as seen in this video, for example.
Might this self-awareness property lift naturally to more abstract computational state-spaces? Heck ... don't ask me ... purely dynamical state-space reductions are complicated enough. :)
For sure, though, autonomous vehicle videos *are* mighty fun to watch, and they are mighty thought-provoking too.
john sidles, oh dear ... what did you do again? Did we not agree upon more succinct posts? No more essays, please. Someone please edit this guy; who does he think he is to take up so much space in these comments? There is definitely no proportion of length to quality here.
I have to really disagree with anon #26. John, your posts may be long, but they're very interesting and I always enjoy reading them. I'm sure that with practice anon #26 can learn to "skip" something he's not interested in reading.
This was among the lamest April Fools jokes I have come across in the last few years...
Maybe it wasn't a joke. Here are some signposts.
1. Roger Penrose in /The Emperor's New Mind/ tried to find such an information-enhancement mechanism via quantum mechanics. IMHO this is now regarded as refuted.
2. Mario Beauregard found at least that mystical experience does not have a single, simple brain origin---see his book /The Spiritual Brain/ with Denyse O'Leary.
3. Manuel Blum has applied complexity to consciousness---see this 2005 entry in this blog. In a special day lecture before FOCS 2009, he lent his support to the Global Workspace Theory of Baars and others.
4. Since "dark energy" is outside the Standard Model of physics (by dint of being off its energy scale at least), I interpret physics as currently grappling with substance-dualism, which has historically been the province of theologians. Which is ironic because...
5. ...it is possible that I could be satisfied by a notion of "soul" that is extensionally nothing more than the assurance of being "sole"---no multiverse or Many-Worlds copies of me! Contra Hawking-Mlodinow's book etc., I hold that existence has just one "Principal Variation" (a computer-chess term). But apparently this common-sense view now requires going outside current science to substantiate.
For one example of 5. with regard to Parade Magazine's article on autism (follow links), I don't think Dana Eisman is a soul trapped in a non-cooperating body, or in a non-cooperating rest-of-brain, but rather that's what she is, and a gift as she is. Of course, if neuro-complexity research could help her, I'm all for that!
Vadim, I have to agree with Anon #26. There are clearly readers who see "long John Sidles post" and scroll past it. If he wants to bloviate, feel big, and have people ignore him, that is his right. If he wants more people to read what he writes, he should write more succinctly in the comments section of a blog post.
Hmmm ... well ... all are invited—fans and detractors most especially—to visit our QSE Group's poster at next week's 52nd ENC Conference at Asilomar: Abstract #614: Quantum Spin Microscopy's Emerging Methods, Roadmaps, and Enterprises.
This poster includes sufficient theorems and propositions relating to TCS/QIT to gratify habitués of MathOverflow and TCS StackExchange ... and yet the word αἱρέω definitely will be included too.
Please appreciate that this is primarily an engineering roadmap (not a mathematical roadmap). Moreover, it is just one roadmap among many reasonable ones (it being neither feasible nor desirable to restrict any vigorous STEM discipline to just one roadmap).
After all, a world in which mathematicians, scientists, and engineers all thought alike—or worse, seldom talked with one another—surely would be a world greatly impoverished of first-rate mathematics, science, and engineering.
As Twain's Pudd'nhead Wilson put it: "It were not best that we should all think alike; it is difference of opinion that makes horse-races." :)
I meant to include two other items.
6. The philosopher David Chalmers here riffs on whether consciousness reduces to a physical process. This might be trumped by his "Matrix" theory where everything is a simulation---but I imagine that computational complexity would still constrain the simulating algorithms. He has also written on "zombies", as raised by some commenters above. The new mammoth "Closer to Truth" enterprise has him as a participant here, with oodles of video material.
7. In reply to Bruno, Russell Shorto notes on p20 of his book Descartes' Bones that Descartes' own intent was to avoid that circularity: "Thinking is taking place, therefore there must be that which thinks." I myself argue that events since Turing mandate replacing Descartes' maxim by a more-direct appeal to self-awareness, which I don't believe offends the circularity. I like to put it in French, with a nod to Quebec's motto: Je me souviens, donc je suis.
How interesting that you have arrived (through computer science) at the same conclusion that old Indian mystics like Ashtavakra, Buddha, et al. arrived at through spirituality. According to them, "you cannot be what you can see, and you are the one who witnesses everything." That means: you can see your body, so you are not your body; you can see your brain thinking, so you cannot be your brain; then you can see yourself thinking, so you cannot be the one who sees yourself thinking; and since you can see the one who sees yourself thinking, you cannot be that one either, but something beyond it. In this way you keep going beyond, recursively, until you reach the state they call enlightenment; some call it "knowing yourself."
My view is that there is a series (or, say, a network) of Turing machines which operate on each other in a very complex form, and your soul is the Turing machine that only records things (witnesses) but never takes any action.
Igor: FYI, your witty retort to Matt earned me a lot of karma points at this month's Less Wrong Rationality Quotes thread. Thanks! :)
Daniel, Igor: I have to admit I didn't see it as a witty retort. I see it as completely failing to answer my implicit question. Let me put it differently: if a human mind simply is a Turing machine, then why is there any distinction between killing a human and turning off a computer?
If you can give me an explanation of this, without appealing to concepts like "hopes", "dreams", "emotions" but ONLY describing concepts which can be expressed in the language of Turing machines (you can use the lambda calculus if you prefer), then I will be happy not to believe in souls. But if you cannot give me an explanation of this that sticks to those rules, then I think you have to acknowledge the existence of something like a soul (assuming you do believe that killing a person is different from turning off a computer, which I assume you do).
Of course, your explanation of this difference in terms of Turing machines should allow me to develop a morality that would make it possible to determine whether Igor's counting of 1 Roomba=1 millisoul is correct. To put that last phrase a bit more seriously: if a person is just a Turing machine, please explain at which point a sufficiently advanced computer does begin to acquire the same moral status as a human.
To answer Matt's "please explain at which point a sufficiently advanced computer does begin to acquire the same moral status as a human.": My minimum condition "5. sole" for having a soul already sets a formidable baseline. Computers are backed up; if one gets destroyed (not just powered off) then just rebuild and reload it. Or have two to begin with...
Of course we have science-fiction tales and understand the concept of computers that could cross this boundary, such as taking responsibility for their own propagation. There is also the issue of whether human beings can ever be molecularly reproduced. But neither is in close prospect now, so my "sole" criterion works to separate them, showing the clear moral difference. Without having "sole" for humans, however, the criterion disappears.
Intermediate between computers and humans is David Chalmers' formalization of a Philosophical Zombie. A zombie can be sole, so if Chalmers is right then there is more to us than my minimum criterion.
[Word-verification is "plown": I hope I have plown this discussion a little deeper.]
KWRegan: So, turning off a computer before saving your work (i.e., when the present state is not backed up) _is_ morally equivalent to killing a person? I think not. And if we could re-create a person molecule-for-molecule, it would be morally acceptable to kill them in a slow, painful fashion and then re-create them and repeat this process over and over again? I think not again.
Finally, the concept of p-zombies refers explicitly to consciousness. That is one of those concepts, like emotions and hopes that I mentioned, that goes outside the framework of Turing machines.
It seems to me that "rational people" are happy to claim that everything is just Turing machines while scoffing at souls, and yet when I dig just a little deeper, their answers are not internally consistent or implicitly depend upon concepts like consciousness. Come on, give me a real answer!
You're reading my baseline/necessary condition as if it were an equivalence---a common rhetorical unfairness. I'm trying to further a discussion into deeper territory, not digest a "real answer" into a couple of paragraphs. But anyone can find my real answer by clicking my moniker then "My Web Page" and scrolling down if need be...
Looking at the material on Ken's web page caused me to reflect upon Joel and Ethan Coen's recent soul-centric film A Serious Man (2009).
Film critics had led our family to expect that this film would be a darkly humorous retelling of the Book of Job ... but by the end of the first view, we knew this was clearly wrong.
By the end of the second viewing, a popular view in our household was that A Serious Man is really a zombie movie ... it's about Professor Larry Gopnik's struggle to escape a zombie plague that, terrifyingly, has terminated the self-awareness of his colleagues and that threatens his wife and children.
Viewed as a zombie film, A Serious Man is terrifying because this particular zombie plague is seductive ... life's much easier if you get with the zombie program ... as the zombies argue very convincingly.
On the third viewing, though, we decided A Serious Man might really be about Turing Tests. How stringently should Turing Tests look for evidence of sympathy ("I understand your feeling") versus empathy ("I share your feelings")?
In medicine there's zero debate on this point ... physicians are trained to embrace sympathy and shun empathy. This is done by reason of practical necessity: this mode of cognition is better for both patients and physicians themselves.
Hmmm ... so are we training our medical residents to be robots? If we are, is this good? How about mathematicians? To what extent is human-level cognition instinctive, versus learned?
And can the capability for fully human cognition be lost outright (as the Coen brothers' film suggests) through what physicians call "disuse atrophy"? Hmmmm ... it appears that we'll have to watch the film again.
The point being, there's far more to human cognition than just rational deduction. Or ... is ... there? ... (as Homer Simpson would say). ;)
The feeling that you should not kill fellow human beings is admirable, but it does not allow for metaphysical conclusions. That you should not desire to kill people, or that you consider eating a leaf of spinach to be in a different moral category, does not have any metaphysical implications (like the existence of a "sole").
I would suggest a kind of reverse Turing test, though. Set up a bunch of "users" who are told to use a computer program and turn the machine off after five minutes. The (apparently sentient) computer program is actually a human trying to act like a machine. But then the machine begs the user not to turn it off. Please!
Daniel, does anyone over on LessWrong actually know any probability beyond Bayes' theorem (which really doesn't make one an expert, as it would probably be covered in the first few lectures of any undergrad course)? I mean, could anyone over there actually, for example, apply the probabilistic method to prove lower bounds on Ramsey numbers, or do a perturbative calculation in a perturbed Gaussian theory (or, you know, even do a Gaussian integral), or compute an effective action, or use the saddle-point method, or do a renormalization-group calculation, or compute connected correlation functions, or apply the Lovász local lemma, or write belief propagation code, or prove any theorem from Shannon 1948, or, like, anything?
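Since the probabilistic method for Ramsey lower bounds came up: Erdős's 1947 union-bound argument is short enough to check numerically. A minimal sketch (the particular parameter values are just illustrative):

```python
from math import comb

def ramsey_lower_bound_holds(n, k):
    # The probabilistic method (Erdős, 1947): 2-color the edges of K_n
    # uniformly at random. The expected number of monochromatic copies
    # of K_k is C(n, k) * 2^(1 - C(k, 2)). If that expectation is < 1,
    # some coloring has no monochromatic K_k, hence R(k, k) > n.
    return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

# With k = 10 and n = 2^(k/2) = 32, the expectation is far below 1,
# certifying R(10, 10) > 32; with n = 1000 the bound no longer applies.
```

This gives the classical R(k, k) > 2^(k/2) lower bound for k ≥ 3, with no explicit coloring ever constructed.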
Matt: What a weird question. I am a quite typical LessWrong regular (theoretical computer science background, but working on applied stuff), and I can do several of those things. Some of the regulars are programmers, doctors or lawyers. They can do fewer of them.
Everyone: After my comment where I mentioned that Igor earned me much karma at Matt's expense, Matt visited LessWrong, and started a debate. He apparently did not like the results. I can perfectly understand his frustration, as LessWrong has a take it or leave it attitude that might seem closed-minded to outsiders. The policy is that we consider some questions settled, so that we don't have to debate them again and again. Most of us are compatibilist physicalist reductionists. (Of course, some are not. Some don't even care, and visit the site for, say, scientifically grounded self-help advice.) This way we can concentrate on the huge amount of stuff we still don't agree on.
This does not mean that Matt did not receive any answers to his inquiries. Please check them out at the link above. Mine is here.
Just curious, Daniel. To an outsider, the background, especially the tendency to regard questions as closed that you mention, looks suspiciously like people who think that they know everything since they know a little something. Not saying it's true about you, but it is my impression of most people there. There was also a high tendency of many people there to make huge assumptions without justification and then grind through very obvious inference steps to make it seem like the final result had been derived rather than just being assumed. It seemed like a "cult of rationality" that had forgotten that rationality usually means having doubt...you know, regardless of whether one wants to debate a philosophical concept, I find it hard to believe that a rational person can say that "with 100% certainty, Turing machines can have exactly the same kind of self-awareness that humans do, and anyone who disagrees is irrational; also, with 100% certainty, the most important thing you can do with your life is to work on AI, and again if you disagree you are irrational", and yet those appear to be the accepted beliefs there...I don't know if that is your belief or not, but it seems to be the majority. I really don't know how to say it without giving offense, but it is the strangest method of argument I've ever heard of.
Actually, my goal was not a debate, but rather to try to understand _why_ this "witty retort" was seen as so "witty", and I didn't get any reply I could benefit from there. Anyway, I'll drop it.
Btw Daniel, I just followed your link to your own reply which I had not read...your comment actually makes more sense than any of the other replies I read. Again, I feel it misses my point, but at least it was much more insightful than the rest. It raises interesting issues (i.e., particularly about the interpretation of "what it is like to be X", since actually a common method that people use to understand processes that they would never call conscious is to think what it is like to be that process...i.e., the ball "wants to roll down the hill" or something). Anyway, at least you didn't say to "read the sequences and come back when you're not so irrational" :-)