Thursday, September 27, 2007

WHERE to apply to grad school?

It's the time of year when Seniors who want to go to Graduate School should be pondering WHERE to go. There are other issues, which I will address in later posts, but

Today's topic is WHERE TO APPLY?

I would like YOU (the readers, and Time Magazine's PERSON OF THE YEAR for 2006) to comment on:
  1. IF a student wants to do COMPLEXITY THEORY where should she go?
  2. IF a student wants to do COMBINATORICS (in a math dept) where should he go?
  3. IF a student wants to do XXX (in a YYY dept) where should ZZZ go?
Note that there may be lesser-known schools (e.g., not on someone's arbitrary top-10 ranking) that are really good in these topics, and they should not be overlooked.

Tuesday, September 25, 2007

Andrej (Andrey) Muchnik: a belated memorial

(Guest post by Alexander Shen. Thanks to Lance for suggesting it.)

Andrej (Andrey) Muchnik died unexpectedly last March, but the news did not spread widely, so I am thankful to Lance Fortnow and Bill Gasarch, who gave me the opportunity to write a few words about him. His death was a sudden blow not only for his family (his parents, Albert A. Muchnik, of the Muchnik-Friedberg solution of Post's problem, and Nadezhda M. Ermolaeva, both students of Petr S. Novikov, and his brother Ilya) but also for all his colleagues.

Andrej, whom I had known since our undergraduate studies, was not only a brilliant mathematician but also a deep thinker. He lived in his own world -- a very rich one that was not completely unrelated to "real life" (i.e., the mess around us), but still clearly separated from it, and the interactions between these two worlds were quite difficult and often painful. His mathematical interests were driven by the internal logic of the subject, not by the current "fashion"; sometimes he rediscovered an old result without knowing about it; at other times his results (unpublished, or published only in a short note) were rediscovered later by others. Probably his best-known result is an elementary proof of Rabin's theorem (the decidability of the second-order monadic theory of two successors), invented while Andrej was a fourth-year student, together with its generalizations; my personal favourite is one of his last results, saying that for any strings A and B there is a string C such that |C| [the length of C] is about K(A|B) [the conditional Kolmogorov complexity of A given B], K(C|A) is negligible, and K(A|B,C) is negligible (all up to logarithmic terms).
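In symbols, this is just the statement above, with everything holding up to additive O(log(|A|+|B|)) terms:

\[
|C| \approx K(A \mid B), \qquad K(C \mid A) \approx 0, \qquad K(A \mid B, C) \approx 0.
\]

Roughly speaking: C is a description of A given B of (nearly) the minimal possible length, and C is itself simple given A alone.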

Andrej was a key participant in the Kolmogorov seminar in Moscow, a seminar that was started by Kolmogorov himself; the work of this seminar and its participants was deeply influenced by Andrej, and I think we all agree that most of the non-trivial ideas discussed in this seminar, and most of the interesting results obtained by its participants, were due to Andrej (or at least inspired by him). Personally, I am extremely grateful to him for all his support and encouragement, and very sorry that I did not tell him this explicitly and did not help him enough while he was alive.

See the seminar site note (in both Russian and English), written mostly by Andrej's teacher and friend Alexey Semenov, who helped him a lot.

Some of Andrej's papers (not all -- we are still trying to prepare some unfinished papers for publication) can be found here.

Monday, September 24, 2007

The Amish and Cell Phones

As most people know, the Amish avoid using certain technologies; they think some technologies are disruptive to the community. In the past the Bishops would meet and decide whether a certain technology was okay to use. They do not use electricity over wires, but batteries are okay. Note, however, that with electric wires or phone wires or cars (which need roads), the Amish Bishops could enforce their decisions, since that kind of infrastructure is hard to hide. Not so anymore.

A long time ago they banned phones. They later made some allowances for having a phone booth for emergencies, but no phones in the house, for that would disrupt family time (the ultimate DO-NOT-CALL list). But more and more Amish are doing business with the outside world, so phones are needed. More and more of them are using cell phones (see Look who's talking).

I do not think cell phones are the real issue here. The real issue is that we now have technologies that an individual can use without the permission of the community. We also have dual-use technologies, so rules like `you can use it for business but not for entertainment or gossip' may be hard to enforce.

I was told by an Amish man that the Bishops have ruled against cell phones and computers. But as batteries get better (and most of them have contacts on the outside who can recharge for them), I'll be curious to see how well this holds up. Will the authority of the Bishops and the desire to hold the community together be enough to make the rules self-enforcing?

Friday, September 21, 2007

Math on TV

There was an episode of NUMB3RS (which starts its fourth season next Friday) that mentioned the Kruskal Count (see also this link). This is a card trick invented by Martin Kruskal. His son Clyde Kruskal is in my dept. Martin passed away in late December of 2006, and it is likely they put the reference in as a tribute. (See this and this for comments on the memorial service.) I watched the episode with Clyde. The good news is that YES, they did indeed mention the Kruskal Count, and it's kind of cool seeing someone you know mentioned on NUMB3RS. The bad news is that the reference made no sense whatsoever. Here are two items that make as much sense as what they said (a quick simulation of the actual Kruskal Count appears right after the list).
  1. The list of suspects looks random but we know that it's not. To solve the case we use the Nisan-Wigderson derandomization technique.
  2. We know the murder took place within this 5 block by 5 block area. We know that there were 10 people interacting to plan it. We can solve the case by looking at both the space and the interactions and then applying the Lund-Fortnow-Karloff-Nisan theorem that changes space into interactions.
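Back to the actual Kruskal Count. Here is a minimal simulation sketch (my own code, nothing to do with the episode) of the principle behind the trick, under the usual convention that aces count 1 and face cards count 5: each card's value tells you how far to jump forward, and no matter where in the first ten cards you start, you usually end up on the same final card.

import random

def kruskal_trial(num_starts=10):
    # One shuffled 52-card deck; follow the jumping chain from each of the
    # first num_starts positions and check whether all chains end on the
    # same final card.  Card values: A=1, 2-10 face value, J/Q/K=5 (the
    # usual convention for the trick -- an assumption, not from the post).
    values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 5, 5, 5] * 4
    random.shuffle(values)

    def last_card(start):
        pos = start
        while pos + values[pos] < len(values):  # stop before jumping off the deck
            pos += values[pos]
        return pos

    return len({last_card(s) for s in range(num_starts)}) == 1

if __name__ == "__main__":
    trials = 10_000
    agree = sum(kruskal_trial() for _ in range(trials))
    print(f"All starting points converged in {agree / trials:.1%} of {trials} shuffles")

Run it and you will see that the chains agree for a large majority of shuffles, which is why the magician can `predict' the card a spectator secretly lands on.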
If math you did were on a TV show, would you mind if they got it completely wrong? Personally, I would not mind; I would be delighted (and surprised) to find my name on TV. A bigger issue: if it were a bad episode I really would not like that, since I would probably want to show it to friends and family, and I would not want to subject them to bad TV. (The episode of NUMB3RS with the Kruskal Count was a terrible episode.) SUGGESTION: Take some math that you have done and see if you can find a way to make it fit into an episode of NUMB3RS. It does not have to make sense, but it should sound like it does. (As I did above.)

Monday, September 17, 2007

Is Immunity Interesting?

In my graduate complexity class I proved
For all computable T there exists a decidable set A such that A ∉ DTIME(T(n)).
I then had on the homework
For all computable T there exists a decidable set A such that, for all Turing Machines M that run in time T(n), there exist infinitely many strings x such that A(x) ≠ M(x).
I then had for extra credit
For all computable T there exists a decidable set A such that, for all Turing Machines M that decide A, for almost all inputs x, M(x) takes more than T(|x|) steps.
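To make the increasing strength of the three statements explicit, here they are side by side in symbols (my phrasing; A(x) and M(x) are viewed as 0-1 values, $\exists^{\infty}$ means `for infinitely many' and $\forall^{\infty}$ means `for all but finitely many'):

\begin{align*}
\text{(in class)} \quad & \forall M \text{ running in time } T(n)\ \ \exists x:\ A(x) \neq M(x) \\
\text{(HW)} \quad & \forall M \text{ running in time } T(n)\ \ \exists^{\infty} x:\ A(x) \neq M(x) \\
\text{(extra credit)} \quad & \forall M \text{ deciding } A\ \ \forall^{\infty} x:\ M(x) \text{ takes more than } T(|x|) \text{ steps}
\end{align*}

The first says that every fast machine makes at least one mistake on A, the second that it makes infinitely many mistakes, and the third that any machine that makes no mistakes at all must be slow on almost every input.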
Katrina LaCurts, one of my students (who is thrilled to be mentioned by name in this blog!), asked the following: is the HW and the extra credit just to reinforce knowledge, OR is the notion of differing infinitely often an important topic within complexity theory?

How important is the notion of sets differing infinitely often? A quick-and-dirty measure is the number of papers on the topic. A quick-and-dirty way to find that is to grep `immunity' in Joel Seiferas's Theory Database and then edit the output. Once done I get the following list:

So I could answer Katrina: there are at least 21 papers on this topic. That shows people have worked on it (and some quite recently) but does not quite answer the question. Hence I ask my readers:
Once we have that, for all time bounds T, there is a computable set A ∉ DTIME(T(n)), why do we care about getting an A that differs infinitely often, or other variants? More generally, why is the notion of having two sets differ infinitely often important? I ask non-rhetorically and politely. Note that, no matter how you answer, Katrina still has to do the HW.

Thursday, September 13, 2007

Question and Metaquestion about Students emailing you problems

I recently got an email that said roughly the following:
Hi Bill,
I am a grad student in combinatorial optimization, and I have a question that I was hoping you could answer: does "FOO is APX-complete" imply "FOO is MAX SNP-complete", vice-versa, or neither?
To be honest this might be very simple... but given that this is essentially the domain of computational complexity I was hoping that either you would know the answer, or otherwise that your blog's readers might have some suggestions. (Basically your blog is currently the main source of computational complexity to my brain, so it seemed natural to ask you when I became confused.)
My impressions from reading up are the following:
  1. APX is the subset of NP optimization problems with constant-factor polytime approximations
  2. MAX SNP has a much more complicated definition
  3. APX contains MAX SNP but I don't know if the containment is known to be strict

To motivate my question, many papers say "the FOO problem is MAX SNP-hard and also APX-hard" but is there currently a point in stating both? As I researched the topic, I found that the reductions allowed in the definition of "APX-hard" and "MAX SNP-hard" are apparently slightly different... and around here my confusion set in.
One compendium, at least, uses simply "APX-hard" by convention: here
I hope this is up your alley, but if not then no worries.

Sincerely,

NAME DELETED FOR THIS BLOG POSTING
This email raises several questions and metaquestions:
  1. The question on APX might be interesting. (For reference, a formal statement of what APX is appears right after this list.)
  2. Is this a HW question? Is she cheating on it by asking me? Should I answer it? My inclination on this one is that it's NOT a HW question; it is legit. However, I don't know the answer, but if one of you does, by all means post it (she knows I am posting this to the blog). By contrast, someone once posted to a newsgroup (remember those?)
    Someone out there please help me with this problem: why is {a^n | n prime} not regular? I'm not a student asking for the solution to HW 4, problem 2. Honest I'm not!!
    Looks like a student doing a HW problem.
  3. ``Your blog is my main source for complexity theory.'' A scary thought. She needs to get more sources, like Luca's and Scott's blogs. Or maybe books (remember those?).
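Back to item 1: so that we are all talking about the same class, here is the standard definition of APX, which is just her first bullet stated formally. Here Π ranges over NP optimization problems, OPT(x) is the optimal value on instance x, and ALG(x) is the value of the solution the algorithm outputs:

\[
\Pi \in \mathrm{APX} \iff \exists\ \text{poly-time } ALG,\ \exists\ c \ge 1\ \ \forall x:\ \max\!\left(\frac{ALG(x)}{OPT(x)},\ \frac{OPT(x)}{ALG(x)}\right) \le c.
\]

The MAX SNP side really does need the syntactic definition and the particular reductions, which, as she says, is exactly where the confusion sets in.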

Wednesday, September 12, 2007

SODA papers are out. Plus...

Request: If you want me to post a CALL FOR PAPERS or LIST OF ACCEPTED PAPERS or whatnot for a theory conference or some other conference, please email me and I will almost certainly do it. It is not so good to make the request in a comment on another posting, since some people do not read the comments (gee, I wonder why :-) ). For those who did not read the comments on yesterday's post, the SODA paper list is out and is here.

This raises the question of which conferences have the most complexity theory in them (say, by percent). Here is my rough guess.
  1. COMPLEXITY
  2. MFCS
  3. ICALP
  4. FOCS/STOC
  5. LICS (which has an occasional article on descriptive complexity)
  6. SODA has an occasional lower bound. The few SODA papers I've been asked to subreferee have all ended up being rejected. The very fact that I am being asked to subreferee is an indicator that they are out of scope.
  7. COLT/ALT and the other Learning Theory Conferences.
If I left out your favorite conference, sorry about that.

Tuesday, September 11, 2007

Search Engines gone wild!

I was curious whether my monograph Bounded Queries in Recursion Theory was still available, so I went to amazon.com and typed in `William Gasarch'. It had 40 hits. (Move over, Stephen King!) I have only written one book, so how is this possible? Here are some of the hits.
  1. Bounded Queries in Recursion Theory by Gasarch and Martin. This hit makes sense.
  2. Handbook of Discrete and Combinatorial Mathematics, in which I have a 4-page chapter on computability. Does this hit make sense? If I say NO then I have to say how long a book chapter has to be before it makes sense to have a hit. So I'll say YES.
  3. The Complexity Theory Companion by Hemaspaandra and Ogihara. I was thanked in the acknowledgments. Is this hit deserved? NO! (I got quite a few more hits of this type, some for proceedings in which one article acknowledged me.)
  4. An Introduction to Quantum Computing by Pittenger. This book is in the same series as my Bounded Queries book, so the back of the book has a list of all the books in the series. Hence I got a hit. This is nuts!


Amazon needs to fix its search engine to be LESS good.

Friday, September 07, 2007

Quantum Computing and Quantum Physics

I have been told quite often that
You don't have to understand Quantum Mechanics to work in Quantum Computing.
That's a good thing since I've also been told
Nobody really understands Quantum Mechanics.
I've also been told
You don't have to have studied Quantum Mechanics to work in Quantum Computing.
I am skeptical of that. However, I was wondering about the other end: if you do have a background in physics, does it help? So I asked Fred Green (of Green's Conjecture) about this, since he has a PhD in Physics, works in a computer science department, and works on Quantum Computing. Here is what he said:
Learning quantum computing helped me understand quantum mechanics better. As a physicist I never thought about measurement theory or entanglement, which were foundational issues, irrelevant to what I was doing. In quantum computing, we reason directly about these things all the time.
He didn't quite answer my question, but he raised a more interesting question. Should quantum physicists learn quantum computing?

In an earlier post I noted that Jerry Seinfeld said comedians should do lots of proofs. Not for their actual routines, but to better practice their craft. Perhaps it is also good advice for people who want to be quantum mechanics (like auto mechanics, but on smaller cars) to learn some Quantum Computing. Not for their actual research, but to better practice their craft.

Tuesday, September 04, 2007

Social Processes and Proofs of Theorems and Programs

How does a theorem come to be believed to be true? There was a paper on this in 1979, Social Processes and Proofs of Theorems and Programs by DeMillo, Lipton, and Perlis. The paper had several points to make:
  1. When a theorem in math is proven, that is just the start of the process. If it is important enough, it will be passed around the community and checked and rechecked. At some point, if it survives scrutiny, it will be accepted. (Makes you wonder about proofs in the literature that nobody reads -- could they be false?)
  2. The people working in Program Verification want to give program-correctness the same confidence that we have in Math Theorems.
  3. This is not a good idea, since programs cannot be passed around the same way proofs of math theorems can. (Makes you wonder about the Classification of Finite Simple Groups, or the Four Color Theorem, which also cannot be passed around that easily.)


The comments on Program Verification do not really apply anymore, since those people seem to have scaled down their claims to building tools that find bugs and to the automatic verification of protocols written in a specification language, which seems far more plausible. (I'm not in the Program Verification field, so if someone wants to tell me I'm wrong, leave an intelligent comment.)

When I first read this article as a young grad student I was very impressed with what it said about math: YES, the proof is just the beginning, and constant checks and rechecks are needed.