Computational Complexity studies the power and limitations of efficient computation. So is efficient computation purely an abstract mathematical object or is it trying to model a real world phenomenon? I would argue the latter. Efficient computation occurs not just in computers but in biological systems, physical systems, chemical systems, economic systems and much more. Physics focuses on the "what", computational complexity on the "how".
We don't do experiments but I do see the role of computational complexity similar to theoretical physics, trying to model the world we live in.
Pure math is the study of abstract objects, applied math develops tools used in other disciplines. Computational complexity is neither, rather an attempt to understand fundamental processes that occur in the universe, much like other sciences.
Complexity theory is math. Saying that it is science because we use models of computation that attempt to model the real world is like saying that Riemannian geometry is science because it models the real world (assuming our current theories of relativity).
As another example, if our understanding of the universe is completely wrong, and the correct theory is actually loop quantum gravity, this will change many things in physics, but it will not make the slightest difference to whether the time hierarchy theorems for deterministic TMs are correct. Of course, we might introduce a new complexity class called LQGP - loop quantum gravity polynomial time - but that's similar to the fact that if LQG requires a special geometry, mathematicians will start studying that geometry, and that won't be called science.
This is a wonderful question partially because it projects so naturally into the future: "Is Complexity Theory going to be Math, or Science, or even (gasp!) Engineering?"
This week's Science has a powerful tool for thinking about this question: selective sweeps. In analyzing the phylogenetic differences between human and neanderthal DNA, it is very often seen that advantageous mutations in one gene spread throughout a population, carrying surrounding sequences with them, even though these surrounding sequences are not themselves particularly advantageous.
When not just one gene, but a substantial chunk of a genome, spreads throughout a population by this mechanism, this is called a "selective sweep". And in comparing human DNA sequences with the draft sequence of neanderthal DNA, we see clear evidence of selective sweeps being a dominant evolutionary force.
Isn't it true that selective sweeps are ubiquitous in math, science, and engineering? Isn't it true that selective sweeps are playing a dominant role in the evolution of all technical professions?
Thus we see that the future of 21st century math, science, and engineering may well be determined by whatever selective sweeps are dominant in the next few decades.
So it's an important question ... what selective sweeps are presently dominating complexity theory?
Because heck, there's only so much room in the curriculum ... dormant academic disciplines are inexorably swept aside!
"...if our understanding of the universe is completely wrong, and the correct theory is actually loop quantum gravity, this will change many things in physics, but it will not make the slightest difference to whether the time hierarchy theorems for deterministic TMs are correct."
It seems to me that this discovery would also not make much of a difference to biologists, but surely you wouldn't argue that biology is not a science.
Anonymous 2: Biology is a science because there are some experiments whose results may make a difference to Biology. These experiments do not have to be the same experiments which are relevant to physics.
Complexity theory is not empirical at all. We don't need to observe the universe to deduce anything about Complexity theory. I completely agree with Anonymous 1. Just like mathematics is often guided by physics, Complexity theory uses models close to real world computers. However, that does not make it Science.
This IS a wonderful question, and different people seem to have different intuitions on it. I can't wait until more people comment on this, I think this will be an interesting discussion.
"So it's an important question ... what selective sweeps are presently dominating complexity theory?"
Do you have any examples? I have a hard time seeing it, but maybe I am too focused.
Let's try the same question with respect to theoretical physics. Is it math or science?
Both theoretical physics and complexity theory clearly involve applications of mathematics.
In both there is a notion that developing good models is nearly as important as their analysis. New scenarios in computer science generate new theoretical models with great regularity. In pure mathematics this modeling aspect is not the raison d'etre.
In theoretical physics there is sometimes a distinction between "theoretical" and "mathematical" physicists, the difference being that mathematical physicists prove their results with mathematical rigor whereas theoretical physicists may use the language of mathematics but often apply techniques for which there is no sound mathematical basis (e.g. renormalization, which lets one subtract infinities but still derive meaningful quantities, the replica method, where one derives formulas that only make sense for positive integral numbers of copies but then computes the continuous limit as the number of copies goes to 0, and regular use of other non-convergent approximations). One often finds mathematical physicists in applied math departments as well as in physics departments.
In theoretical computer science we are with the mathematical physicists in the level of rigor that we require. This would suggest a connection to applied math, except that many of our methods are often very different from those of most interest in a lot of applied math departments. Though the methods of complexity theory owe a lot to combinatorics and discrete probability, the methods used cross the spectrum. Indeed algebra, algebraic geometry and recently functional analysis and metric spaces have played a significant role.
For most branches of pure mathematics one can phrase associated computational questions. For every Π₂ theorem of that branch - for every x there is a y - there is the associated question of the complexity of an algorithm to find such a y given x. Thus complexity theory is cross-cutting with respect to mathematics itself and has an aspect of meta-mathematics. Now, when we ask the algorithmic question about some theory of pure mathematics, is that applied mathematics? I think not.
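To make the ∀∃-to-search translation concrete, here is a small sketch (the example and function names are my own illustration, not from the comment): Bertrand's postulate is the Π₂ statement "for every n ≥ 1 there exists a prime p with n < p ≤ 2n", and the associated computational question asks how efficiently such a p can be found given n.

```python
def is_prime(m: int) -> bool:
    """Trial division; fine for small illustrative inputs."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def bertrand_witness(n: int) -> int:
    """Find a prime p with n < p <= 2n; one exists by Bertrand's postulate."""
    for p in range(n + 1, 2 * n + 1):
        if is_prime(p):
            return p
    raise AssertionError("unreachable if Bertrand's postulate holds")

print(bertrand_witness(100))  # → 101
```

The theorem guarantees the search succeeds; complexity theory asks how fast it can be made, which is a different question from the theorem itself.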
The question is not the right one. In a field such as physics it is not an "either...or" question.
Theoretical computer science is as much a science as mathematical physics. It clearly has a lot in common with applied mathematics but it also has aspects of pure mathematics as well.
Do you have any examples (of selective sweeps)?
Hmmm ... OK ... let's see ... compressive sampling is an in-process complexity theory sweep (IMHO). Category theory was an epoch-making "sweep" in algebraic geometry (and many other mathematical disciplines). In systems/synthetic biology and condensed matter physics, it's pretty clear that nanoscale dynamical simulations are the foundation of a still-accelerating selective sweep.
The high-profile disciplines of string theory and quantum computing are interesting cases. They are sweeps for sure ... but are they math sweeps or physics sweeps? The answer will depend largely on what state-space nature uses for dynamics (as usual in physics).
We know for sure that, on human scales, Nature's state-space is approximately Minkowski/Hilbert, yet (experimentally and observationally) we know rather little about the Planck-scale, Hubble-scale or large-dimension structure of that state-space ... and it is upon this fundamental question that the future of string theory and quantum computing largely hinge.
For sure, understanding more about Nature's state-space, in which we are all of us embedded—and in applying that understanding to meet this planet's urgent practical challenges—is a globally shared 21st century adventure in which mathematicians, scientists, and engineers ... and complexity theorists especially! ... *all* are making wonderful contributions!
That's why (IMHO) it's gonna be a *GREAT* century ... the century of STEAM! :)
Lance says "We don't do experiments but..."
Really? Why not? While I would call myself an algorithmist and not a computational complexity theorist, the line can really be quite blurry. I do experiments often -- where for me an experiment is generally a simulation, often of some random process, for one of various reasons: checking my intuition or proof, or just trying to figure out what the heck is going on.
Please be careful to speak for yourself -- "I don't do experiments" instead of "We don't do experiments" in such situations. I'm pretty sure there are theoretical physicists that do experiments (or at least get others to do them for them), and I'm even more sure that there are "complexity theorists" that do the same.
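A hedged sketch of the kind of intuition-checking simulation described above (the balls-into-bins example and all names here are my own choice, not the commenter's): throw n balls into n bins at random and measure the maximum load, which theory predicts grows roughly like log n / log log n.

```python
import random
from collections import Counter

def max_load(n: int, trials: int = 200, seed: int = 0) -> float:
    """Average maximum bin load over independent trials of n balls into n bins."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        loads = Counter(rng.randrange(n) for _ in range(n))
        total += max(loads.values())
    return total / trials

# A quick experiment: the maximum load grows, but very slowly with n.
for n in (100, 1000, 10000):
    print(n, max_load(n))
```

Running a few lines like this is exactly the kind of "experiment" that can confirm or challenge a conjectured bound before one tries to prove it.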
Well, in the hopes of stimulating more discussion, please let me say that I have *No* original ideas regarding STEAM, but rather have embraced (essentially without change) the STEAM point-of-view from three primary sources.
These sources are listed below, *NOT* to assert that they are dogmatically correct, but to provide (students especially) with a starting-point to a wonderful and inviting STEAM literature (all three are available on-line).
STEAM Source 1: Art and Science
Donald Knuth's (famous) one-page Preface to the monograph A=B, by Petkovsek, Wilf, and Zeilberger (henceforth PWZ) can be enjoyably read by any STEAM enthusiast: "Science is what we understand well enough to explain to a computer. Art is everything else we do."
STEAM Source 2: Humans and Computers
Chapter 1 of A=B is IMHO just as good, in particular Section 1.1 Evolution of the province of human thought: "The frontiers of human thought are being pushed back by automated processes, forcing people, in many cases, to relinquish what they had previously been doing, and what they had previously regarded as their safe territory, but hopefully at the same time encouraging them to find new spheres of contemplation that are in no way threatened by computers."
Depending on one's individual interests, some mapping of PWZ's essay is required; for example I have to map what PWZ say about "combinatoric identities" onto my own interest "projective dynamics" --- it's remarkable how broadly applicable PWZ's ideas are.
STEAM Source 3: Tricks and Ideas
Saunders Mac Lane's Mathematics, Form and Function has a fine chapter titled Mechanics: Tricks versus Ideas: "Analysis is full of ingenious changes of coordinates, clever substitutions, and astute manipulations. In some of these cases, one can find a conceptual background. When so, the ideas so revealed help us understand what's what. We submit that this aim of understanding is a vital aspect of mathematics."
As with PWZ's framework, Mac Lane's framework of separating "tricks from ideas" is very broadly applicable ... indeed ... this PWZ/Mac Lane separation (arguably) is what category theory and complexity theory *both* are all about.
Just to conclude, there are *plenty* of wonderful, passionate essays out there relating to STEAM ... and for young researchers who want to participate in the selective sweeps of the 21st century ... reading these Knuth/PWZ/Mac Lane essays (and many more) will perhaps be, for some students, all that they need to get started in catalyzing their *own* selective sweeps. Good!
Certainly these ideas launched Knuth, Petkovsek, Wilf, Zeilberger, and Mac Lane down *very* productive STEAM paths ... and so we can all be grateful to them, for sharing their ideas, and grateful too to Lance, for this fine topic.
I am with Anonymous #1.
Simulations don't count as experiments, btw.
You should think of experiments in physics as verification/refutation attempts of models of the real world. These models are never complete, although they are always getting better.
Computer "scientists" always have 100% complete models of both abstract and real computers. As the model is already complete, a simulation does not tell you anything more about it.
First, I must say that I liked what Scott said on the talent show (there is a link to its video on his blog). Computer Science is more than math or science, and complexity is also. It is connected to everything, and it is not a subdiscipline of something else.
I think this is a very subjective matter, so I don't think we should expect even vague consensus about what complexity is. These kinds of questions are rather useless. I would ask instead what the consequence would be if we agreed that it is A or B. It is like endless useless arguments in philosophy with no clear answer at the end.
For me, CS is more like a pragmatic approach to everything. It is not the papers we write or the results we prove that define what complexity or CS is, but rather the attitude we have toward problems, the problems we are interested in, and the reasons behind this interest. I would even claim that CS is a kind of pragmatic philosophy, as the problems we are dealing with are kind of philosophical questions. Let me give a relatively clear example for CS. In operating systems we have limited resources and we have to come up with practical algorithms to assign these resources to processes, and this leads to the questions "how should we share the resources in the system?" and "what is a 'fair' way of allocating them?" The same is also true for complexity, at least for me.
I know that there are people who think that complexity is pure mathematics, basically because this is their attitude toward it, and I know that there are others who would claim it is not. Others would claim that it is engineering. I don't think that there are clear border lines between the areas, but as I said, I would not make it a sub-discipline of any other subject.
Also, to support what Michael said about experiments in CS, take a look at the two papers from 2008 here:
I think their work is a good example of experimenting.
Anonymous says: Simulations don't count as experiments, btw.
LOL ... this statement reminds me of a STEAM reference I forgot ... this year's CASP9 Experiment (a site well worth visiting for any complexity theorist).
In CASP9 we get to watch a STEAM selection sweep vigorously in-progress ... because the (mainly young) CASP9 experimenters are putting into practice radically new ideas for the future of STEAM.
These young CASP competitors *definitely* would not agree with assertions that "simulations don't count as experiments", because within the world of CASP, researchers routinely run tens of thousands of (fast, cheap, uncertain) computational experiments, for every one (slow, expensive, uncertain) wet-bench experiment.
Now, which kind of experiment---physical or computational---yields gold-standard STEAM truths? In practice, both physical and computational experiments yield similarly fuzzy results ... and thus play similarly necessary roles ... such that neither can be wholly dispensed with.
From a computational complexity point-of-view, we are (apparently) still very far from approaching the algorithmic limits to CASP-type simulation capabilities ... which is terrifically exciting for everyone involved.
Note: because I regularly attend the weekly research meeting of a CASP competitor group, the above remarks reflect direct observation of an in-progress STEAM selection sweep.
The intent of the above is not to argue that CASP9 reflects how STEAM should work, but rather to observe, that experiments like CASP9 are increasingly how 21st century STEAM does (vigorously!) work.
Randal Bryant's "Graph-Based Algorithms for Boolean Function Manipulation" contains a section titled "Experimental Results." Does this mean it's not a theory paper?
I'll answer my own (not really rhetorical) question, by saying that I've seen plenty of theory papers that also discuss experimental results. (Bryant's just happens to have generated more citations than most.) Beyond that, there's an entire journal dedicated to "experimental mathematics." You bet pure mathematicians run -- and are informed by -- experiments about the "laws of numerical nature." Just because we can state all the rules of a model, it does not mean we understand the behavior of the model (unless lots of unlikely collapses really do occur).
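A small sketch of the "rules known, behavior not understood" point (the Collatz example is my own choice, not the commenter's): the 3n+1 map has completely specified rules, yet whether every orbit reaches 1 is open, so experiments are the only way to probe its behavior.

```python
def collatz_steps(n: int, limit: int = 10**6) -> int:
    """Number of steps for n to reach 1 under the 3n+1 map (capped at limit)."""
    steps = 0
    while n != 1 and steps < limit:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# An experimental scan -- evidence about the model's behavior, not a proof.
print(max(collatz_steps(n) for n in range(1, 10000)))
```

Every rule of this "model" fits on one line, yet its long-run behavior has resisted proof for decades; that gap is exactly where experimental mathematics lives.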
This isn't limited to the modern era of superfast desktop computers and GIMPS-type internet math projects. The interplay between mathematical experiment and mathematical proof goes back to the Pythagoreans.
I see computational complexity, and TCS, as mathematical sciences.
Complexity theory is math. Saying that it is science because we use models of computation that attempt to model the real world is like saying that Riemannian geometry is science because it models the real world (assuming our current theories of relativity).
The analogy doesn't hold. Geometers wouldn't care if Riemannian geometry were not the model of reality, whereas computer scientists would. In fact it has already happened. When RAM became cheap enough that DFAs were no longer common, TCSers massively migrated away from the study of DFAs and into the study of the polynomial hierarchy.
We don't do experiments
Oh no, not the experiments == science argument again.
By that measure astronomy isn't a science.
We don't need to observe the universe to deduce anything about Complexity theory.
We need to observe the universe to certify that what we are deducing is relevant to the real world.
Some people might not care about applicability of CC and those are welcome to transfer to a math department. That is a property of them as researchers, not of the field as defined within CS.
experimenting in social sciences:
Anonymous says: When RAM became cheap enough that DFAs were no longer common, TCSers massively migrated away from the study of DFAs and into the study of the polynomial hierarchy.
You have cited a good example of a complexity theory "selective sweep."
But the particular sweep wasn't driven by RAM-induced technological obsolescence ... `cuz heck, finite-state machines are ubiquitous in mechatronic engineering even today (our own quantum spin microscope has several finite state machines embedded in it; we use them to shuttle molecules in-and-out).
Instead the sweep was driven by precisely the mechanism that PWZ describe in A=B: "Algorithms that generate proofs constitutes some of the most important mathematics being done. The all-purpose proof machine may be dead, but tightly targeted machines are thriving"
Thus, DFAs as a complexity-theoretic discipline fell dormant because of the spread of tightly-targeted "proof machines" of the type that PWZ describe—a.k.a. "MATLAB StateFlow Toolboxes".
More broadly, enterprises like CASP9 represent a continuation of the sweeping trend to encompass ever-larger state-spaces—both classical and quantum—within the domain of ever-more-sophisticated "proof machines" that generate validated and verified dynamical simulations.
To borrow Knuth's phrase, what once we calculated by art (namely, our mathematical and physical intuition) we now increasingly simulate by science.
These ongoing developments all rest upon natural foundations in complexity theory ... which (IMHO) is thus becoming *the* central discipline of every college of engineering.
Lance says: "Pure math is the study of abstract objects, applied math develops tools used in other disciplines. Computational complexity is neither, rather an attempt to understand fundamental processes that occur in the universe, much like other sciences."
This dovetails nicely with Mac Lane's "Mechanics developed by the treatment of many specific problems" ... if we make the following definition: "Complexity theory is the study of dynamics in all its forms."
From this point-of-view, a Turing Machine is simply a general dynamical process over a Boolean state-space ... simulation is the pullback of dynamical processes onto simpler dynamical processes ... and the Church-Turing thesis is simply the statement "All theorems about computational complexity are theorems about dynamical processes ... and vice versa."
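A minimal sketch of this dynamical view (the tiny machine, its state names, and the helper functions are illustrative assumptions, not from the comment): a Turing machine is just a step function acting on configurations (state, tape, head). The toy machine below flips every bit of its input and halts.

```python
FLIP = {  # transition table: (state, symbol) -> (new state, symbol to write, head move)
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),  # blank: stop
}

def step(config):
    """One application of the dynamics: configuration -> next configuration."""
    state, tape, head = config
    symbol = tape[head] if 0 <= head < len(tape) else "_"
    new_state, write, move = FLIP[(state, symbol)]
    tape = list(tape)
    if 0 <= head < len(tape):
        tape[head] = write
    return (new_state, "".join(tape), head + move)

def run(tape):
    """Iterate the dynamics from the initial configuration until it halts."""
    config = ("scan", tape, 0)
    while config[0] != "halt":
        config = step(config)
    return config[1]

print(run("0110"))  # → "1001"
```

Nothing here depends on `step` being a Turing machine in particular; any discrete dynamical process over a finite state-space fits the same iterate-until-fixed-point shape, which is the comment's point.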
In practice, it is evident that computational complexity is a part of mathematics.
The most fundamental question of complexity theory is that of separating the polynomial hierarchy. This is a mathematical question without any doubt.
Also, in practice, mainstream complexity theory is driven by new mathematical ideas and techniques, coming from mathematicians. E.g., the unique games conjecture is a purely mathematical problem.
In the mathematical sciences the act of working on a given problem may be purely mathematical, but that does not make the field mathematics. Once you have some population genetics model, understanding its behavior is a mathematical process (as well as part of the field of population genetics), but that does not make population genetics mathematics. Similarly, once we have settled on a specific question like P versus NP it becomes a mathematical question, but that does not make computational complexity mathematics.
Similarly, once we have settled on a specific question like P versus NP it becomes a mathematical question but that does not make computational complexity mathematics.
It does, if Lance's question was about computational complexity as practiced by complexitists.
You are arguing about what computational complexity should be, not about what it is in practice.
It does, if Lance's question was about computational complexity as practiced by complexitists.
As Michael Mitzenmacher said, careful with overgeneralizations. Moreover, Lance was not asking about what "complexitists" do, but what complexity theory is.
Lance was not asking about what "complexitists" do, but what complexity theory is.
But just like Michael Mitzenmacher said ... isn't that a very blurry distinction? Didn't Wittgenstein (for example) make great progress by asking, not what philosophy is, but what do philosophers do? And similarly Turing, by asking not what computing is, but what computers do? And Gauss by asking, not what space is, but what surveyors do? And Mac Lane, by asking not what mathematics is, but what it is that fundamental mathematics does?
This principle perhaps is the point behind Mac Lane's observation, that "Mechanics developed by the treatment of many specific problems" ... and not by discerning the symplectic roots of mechanics ab initio.
If we ask "What do complexity theorists do?"—as a matter of daily professional practice—they analyze complex dynamical systems. And these dynamical systems include both the "tightly-targeted proof machines" that PWZ focus upon, and the ever-expanding class of broad-spectrum simulation machines, such as are fundamental to modern synthetic/systems biology and condensed matter physics.
The progress in recent decades of every kind of dynamical engine has been astounding ... from proof engines on Boolean state-spaces to large-scale quantum simulations on tensor network state-spaces ... the pace of progress makes Moore's Law look stately.
Thus, there's no need to wait for new synthetic sweeps to originate in complexity theory ... because all of STEAM is being transformed right now, by synthetic sweeps that originated in complexity theory.
isn't that a very blurry distinction?
It is a somewhat blurry distinction. Furthermore I can think of several instances where what people do and what the field is can be very far apart.
Think back to the days of strong AI in the 70s, when AIers were in the business of making grand empty promises about future advances. This doesn't mean that AI is the field dedicated to grand empty promises. AI is the study of grand and very difficult problems---not promises. AIers just lost track of their true goal for a while.
Yet another distinguished essay that bears on Lance's question is William Thurston's On Proof and Progress in Mathematics ...
Thurston reviews (numerous) respects in which mathematics itself is not solely about proofs—his case history of the development of fibration theory is uniquely insightful—and he suggests that a good question to ask is "What is it that mathematicians accomplish?", and more specifically, "How do mathematicians advance human understanding of mathematics?"
IMHO Thurston's questions are good ones to ask about complexity theory too. Even after we narrow our attention to dynamical simulation theory, they *still* are very good, very deep questions.
Thurston's preface to Daina Taimina's Crocheting Adventures with Hyperbolic Planes covers the same ideas in a less technical way, that perhaps for some people will be even more enjoyable.
Geometers should care about the real world's geometry. Fortunately, CS folk are more in touch with reality.
Experiments on "protein structure" prediction? I forgot that proteins were taught in CS101.
Astronomy is science because they too are working with a model that they may confirm or invalidate. Their "observations" are experiments.
Something is a science if, at the end of all our theorizing and hypothesizing, you can test out the idea and see if it matches up with experience. If they don't agree, we toss the idea out or revise and retest. When that all-important step is left out, we can't call what we're doing science. The observable world is the final arbiter.
"How applied mathematics became pure" by Penelope Maddy
The Latin root *scientia* generally means knowledge, which I understand is not the common use, because people usually mean physics, chemistry, biology, maybe math, and engineers are suspect. In any case, the more general "knowledge" is useful because it isn't restricted by discipline or the objects of study. Aristotle, for example, thought there were two types of knowledge: one involving those objects whose quantification he felt was not in question, and the other including relations between objects, which, in society, would mean that we can develop knowledge or science (of course, he didn't know Latin, so maybe he couldn't be a scientist, ha ha).
Computation doesn't happen in biological systems unless you believe that cell division performs multiplication by two. Multiplication, and all computation, is a mathematical abstraction.
Multiplication, and all computation, is a mathematical abstraction.
Addition happens in nature. You take four things, join them with three things, and now you have seven things. Addition, then, is a mathematical abstraction as much as Newton's laws are a mathematical abstraction.
Computation doesn't happen in biological systems ...
Hmmm ... the boundaries between logic, combinatorics, and dynamics can be *very* hard to pin down in *any* field ... especially in biology!
For example, there's a 1948 letter from von Neumann to Wiener, in which von Neumann guesses, on purely informatic grounds—and years in advance of Watson and Crick—as follows:
"Certain traits of the gene-enzyme relationship, of the behavior of some mutants, as well as some other phenomena, seem to emphasize some variants of self-reproductivity, which one would be led to investigate on purely combinatorial grounds as well. E.g.: Self-reproductivity may be symbolized by the schema A→A. What about schemata like A→B→C→A, or A→B→C→D?"
This is pretty darn impressive for 1948!
This is pretty darn impressive for 1948!
No, it is not. It is completely normal for von Neumann.
If I had a wish for computational complexity, it would be that the "result" (or at least a result) would be something as useful and understandable and influential as the Gang of Four's Design Patterns. But not for general object-oriented programming; rather, for search and optimization problem-solving.
I'm hopeful that a lot of the trouble you-all are feeling and expressing might dissipate if we question these sorts of dichotomies. Philosophically, whether somebody claims they're doing "theory" or "practice", or "engineering" vs "science", or "applied" or "pure" mathematics… they all are for something, and it might be more helpful to think about the kinds of problems they solve.
So: if there were a real, useful design patterns effort arising from the broader work in Computational Complexity, that might help address these confusing issues, is all I'm saying.
"Efficient computation occurs not just in computers but in biological systems, physical systems, chemical systems, economic systems and much more."
I would propose that the capacity for abstraction is a primary driver of increasing complexity. Efficiency simply encapsulates a contiguous or direct trajectory.
Another primary driver of complexity is concurrence.
While science and math do have the capacity for abstraction, neither science nor math has the capacity for concurrence.
Concurrence means clocks for every granule and every granule a clock.
Modern microprocessors, while marvelous feats of engineering, are still only a single granule.
Even threading isn't real concurrence, nor is cloud computing or clusters.
They all require software compilers, which is a form of predetermination.
If software were a key to handling genuine complexity, then as OOP and OOD moved us away from the machine, there should have been a new machine to take their place.
We really need to start thinking in terms of metabolisms, not systems.