Thursday, April 28, 2011

How important is Teaching Experience on the job market (guest post)

( Announcements: New York Theory Day May 13 and UMCP Theory Postdoc opening. )

This is an anon guest blogger. Even we don't know who this is! He or she emailed us about the topic and we invited him or her to do a guest blog on it. I have added my comments on it as well.

I'm a PhD from a Tier-I research university (about to start a postdoc at an institution of similar caliber). Outside of being a TA (i.e., grading), I have never taught a course, and I have received conflicting advice about the importance of teaching experience on the academic job market. From some people I have heard that having plenty of teaching experience is a plus on the job market. From others I have heard that teaching can only take away from time better spent doing research. I have even heard that teaching more than a couple of courses on one's own is potentially harmful to a CV. The job market being what it is, I am planning on applying to jobs at universities and colleges all across the spectrum. Would your readers be kind enough to share their thoughts on this issue?

How much teaching experience should one have when applying for: TT jobs at Tier-I research universities; TT jobs at Tier-II research universities (please interpret "Tiers" liberally; I don't mean for this question to spark any debates about specific places); teaching posts at research universities; teaching posts at liberal arts colleges? (For instance, does someone with an excellent research background, but no teaching experience, not stand a chance of getting a job at a liberal arts college?) Thoughts?

Here are Bill's comments:
  1. I find the notion that having lots of teaching on your CV is a negative to be absurd. When looking at your research they will ask: How many papers have you published? (and in which conferences, etc.) For a job at a research university there are four possibilities that collapse to two possibilities (this is an exaggeration; see the next point).
    1. If you teach a lot and do not have a lot of papers they will say Not enough papers, without caring why.
    2. If you teach a lot and have lots of papers they will say Enough papers, without caring why.
    3. If you teach very little and do not have enough papers they will say Not enough papers, without caring why.
    4. If you teach very little and have lots of papers they will say Enough papers, without caring why.
  2. If there is evidence that you are a good teacher this will be seen positively. How much they care may vary tremendously, not just from school to school, but even within a school, from person to person. But to get a job at a research university you have to have done lots of research. Other things (teaching, service, patents, blogs, willingness to give rides home to faculty who can't drive) are all secondary. Still, they can be important as tie breakers.
  3. Liberal arts colleges I am less familiar with, so I welcome comments on this. You raise a good hypothetical question: if a BRILLIANT researcher who was a TERRIBLE teacher (there are such people!) were to apply to a liberal arts college, would they get a job?
Those are just my opinions. What do you think?

Wednesday, April 27, 2011

Don't Blame the Tech

Short Announcements: STOC Poster Submission Deadline is Monday. Jaime Morgenstern set up a google groups page for students to find roommates for STOC, FCRC and other theory conferences, "particularly useful for women and students at smaller departments."




I usually agree with Moshe Vardi when he writes about conferences but not so much on his latest CACM editorial Technology Has Social Consequences. Moshe blames electronic program committee software for declining ethics of program committee members, increased PC service and more farming out papers to junior reviewers.

Moshe has a rosy memory of the past. I knew more than one PC member back in the day who would work on a paper submitted to a conference. The slowness of paper dissemination back then hid it better. Junior people and students have always served as subreviewers; one professor even tried to run a course on reviewing FOCS papers.

What's changed is that computer science has grown and specialized. We have more conferences and each conference needs an ever larger PC to cover all the specialized subfields. Program committees have become unwieldy, with ethics and quality reviews harder to enforce. These problems won't be avoided by holding an electronic meeting instead of a physical one. A strong PC chair makes more of a difference than the venue of the meeting.

Tuesday, April 26, 2011

Ravi Kannan wins the Knuth Prize

Ravi Kannan will receive the 2011 Knuth Prize for his algorithmic work, including approximating volumes of high-dimensional convex objects, computing the Frobenius number and finding low-rank approximations to matrices. The techniques Kannan helped develop, including sampling via random walks and the weak regularity lemma, have applications across the algorithmic spectrum.
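To get a feel for the low-rank approximation work, here is a small sketch of my own of the length-squared row-sampling idea (in the spirit of the sampling techniques mentioned above, not Kannan's actual algorithm; the matrix sizes and sample count are arbitrary choices for the demo), compared against the optimal truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a nearly rank-2 matrix: a rank-2 signal plus a little noise.
n, d, k = 200, 50, 2
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
A += 0.01 * rng.standard_normal((n, d))

# Benchmark: the best rank-k approximation, via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = (U[:, :k] * s[:k]) @ Vt[:k]

# Length-squared sampling: pick rows with probability proportional to
# their squared norms, rescale, and use the sampled matrix's top right
# singular vectors to define a rank-k projection of A.
m = 20  # number of sampled rows (arbitrary choice for this demo)
p = (A ** 2).sum(axis=1) / (A ** 2).sum()
idx = rng.choice(n, size=m, p=p)
S = A[idx] / np.sqrt(m * p[idx, None])
_, _, Vt_s = np.linalg.svd(S, full_matrices=False)
A_sample = A @ Vt_s[:k].T @ Vt_s[:k]

err_svd = np.linalg.norm(A - A_svd)
err_sample = np.linalg.norm(A - A_sample)
print(err_svd, err_sample)  # the sampled error is close to the optimum
```

The point of the sampling version is that it only looks at a few rows of the matrix, which is what makes such methods attractive for huge inputs.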

Kannan will receive his award at the STOC conference at this year's FCRC meeting the week of June 5th. He will give the Knuth Prize lecture as an FCRC plenary speaker. With Les Valiant's Turing Award lecture on Sunday and Ravi Kannan's Knuth Prize lecture Tuesday, not to mention STOC (with the first STOC poster session), Complexity, EC and many other conferences, San Jose is the place to be in June. Early registration deadline is May 16.

Thursday, April 21, 2011

What did Banach's Wife think of the Banach-Tarski Paradox?

I recently read and reviewed The Pea and The Sun by Leonard Wapner, which is about the Banach-Tarski Paradox. Recall that the Banach-Tarski Paradox is actually a theorem (not a paradox) that says you can take apart a ball of volume 1 into a finite number of pieces (some of the pieces will be VERY funky) and reassemble to get a ball of volume 2. By repeating this process you can take a pea and get a ball the size of the sun. (My review is here).

When I told my wife this theorem (she has a Masters in Comp Sci and knows some math) she said Math is Broken!!! She later wondered why mathematicians didn't just toss out the Axiom of Choice (for uncountable sets) since it leads to such an obviously false theorem.

By coincidence, Leonard Wapner had just emailed me to thank me for my review (and to agree that MacArthur Park is the worst song ever written; see this article on my website). So I put the question to him: why didn't mathematicians just toss out AC since it leads to BT? What follows is an edited version of our back and forth emails.

BILL: I told my wife the Banach-Tarski Paradox and she wonders why mathematicians didn't just IMMEDIATELY toss out the Axiom of choice for uncountable sets since BT is so obviously false.

LEONARD: I do not believe BT is obviously false. I believe it to be true, based on the axioms of set theory, including AC. Your wife is claiming there are no real world manifestations of the phenomenon. She is not alone in this feeling. Personally, I am highly skeptical of there being any real world models of the BT result. (Those given in the book were clearly labeled as speculations at best.)

But I also believe we can't rule out the possibility, based on mathematical and physical discoveries to date. BT does not contradict any theorem or axiom. And it's no more counterintuitive than some Baby BTs (mini versions of BT that preceded it and are also counterintuitive) or some physical phenomena (relativity, quantum mechanics, etc.).

BILL: By the time BT was proven, AC was embedded in the math culture. Many things had already been built on it. Is that why math folks didn't just toss it out? Would anything important be lost if we tossed uncountable AC out?

LEONARD: I believe it wasn't tossed out because there was no mathematical justification (in the way of a mathematical contradiction) to do so. If AC were to produce a mathematical contradiction, then, being a questionable axiom, it might be tossed. But, there is no mathematical contradiction and I think most mathematicians accept AC. In any case, it would be even more counterintuitive for some to drop it than accept the BT result. AC allows for surprises, but no mathematical contradictions.

I suspect that there are useful theorems, relying on AC, which would be lost if AC were to be tossed. I can't give a specific example now, but I do know that there have been proofs relying on AC which have encouraged others to prove those same theorems without AC. BT, in its usual form, requires AC. Some variations of it do not.

BILL: BT is obviously false in the real world. Is this enough of a reason to toss it out? (My wife thinks so.)

LEONARD: Coincidentally, my wife asks me the exact same question. And, this is precisely what intrigues me about BT and other paradoxes. As I write above, I prefer to say it is "apparently" false in the real world, rather than obviously false. I can't rule out that which I've not seen simply because I've not seen it.

Tuesday, April 19, 2011

Choosing an Undergrad School

It is the time of year in the US when high school students have found out which schools have accepted them and now must decide where to spend the next four years. Maybe it's because I am of that age, but I find myself talking more to students and parents about which school they should choose.

So let's assume you are a high school student who knows you will eventually want to get a Ph.D. in computational complexity or perhaps some other math-related topic. Where should you go to school? Much depends on your personality and your choices.

The Elites (examples: Princeton, Harvard, Stanford, Yale): Many rich kids who feel entitled so you can get good grades without working hard. But you can also take advantage of great professors and some challenging courses and research opportunities if you are up to the challenge.

Intense Schools (MIT, Caltech, U. Chicago): Here you have to work hard for your grades against other very strong students in challenging courses. You won't have a better math/science education anywhere else but these schools won't give you as broad a social experience.

Broad Private Schools (Cornell, Northwestern): Here I have biases, having gone to Cornell undergrad and now teaching at Northwestern. Fine math and science programs, not quite as strong as the above, but more than made up for by experiencing undergraduate life with smart students across a wide spectrum of disciplines. Many a theorist got their start as a Cornell undergrad.


Liberal Arts Schools (Williams, Wesleyan, Harvey Mudd): You don't get the large research programs but instead have very good profs who focus on undergrad teaching. This is the easiest way to get involved in research as an undergrad.


Big State Schools (Illinois, U. California, Michigan, Wisconsin): You get a real mix of students from athletes to partiers to really smart kids looking to save a few dollars. You can certainly get a great math and science education. But you'll have to work hard when many of your fellow students may not be.

General advice: It doesn't really matter that much where you go; as long as you work hard you will succeed. Above all, enjoy your undergrad days, for they will be the best times of your life.

Thursday, April 14, 2011

Going off topic in class: I think it worked--- this time

Recently I went off topic in a class. I think it was okay but I want YOUR thoughts.
  1. On Monday I
    1. defined Primitive Recursive functions
    2. showed them that addition, mult, exp are all primitive recursive,
    3. talked a little bit about TOWER and WOWER, both primitive recursive,
    4. constructed by diagonalization a computable function that is not primitive recursive,
    5. noted that this function is not natural,
    6. noted that there are natural computable functions that are not primitive recursive (NOTE that I noted this),
    7. made the point that I can do the same diag argument for ANY reasonable system of (total) computable functions.
  2. On Wednesday I was all set to start Turing Machines when a student named Amber said You promised to show us a natural example of a computable function that is not primitive recursive! I didn't recall exactly promising that but I very well might have and if a student cares about these things you want to nurture that curiosity.
  3. I told the class that I would toss out today's lesson plan and talk about this, but that I would be informal and some of what I said might not be quite right, though it would get the idea across.
  4. I asked one of the students who had his laptop up to stop surfing porn and look up Ackermann's function for me, so I could get the definition right.
  5. I talked about the Union-Find data structure that has amortized O(n INVERSE-ACK(n)) running time for n operations. I noted that this would not be an impressive use of Ackermann's function if Amber could find an O(n) structure. I had the class vote on whether Amber could do that. The vote was about fifty-fifty. I then told them that Amber could not, not because she is not a fine programmer, but because there is a lower bound proof that nobody can do better. That is, the lower bound has been proven.
  6. I then talked about Goodstein Sequences which lead to computable functions that are not primitive recursive.
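For concreteness, here is a quick toy version of Ackermann's function in Python (memoized so the small cases finish; the big cases are hopeless no matter what):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """Ackermann's function: total and computable, but it eventually
    outgrows every primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# The first rows are tame: ack(1,n) = n+2, ack(2,n) = 2n+3,
# ack(3,n) = 2^(n+3) - 3.  After that the values become towers.
print([ack(m, 3) for m in range(4)])  # [4, 5, 9, 61]
```

Even ack(4, 2) is a number with about 20,000 digits, so don't try row 4 at home.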
I went off topic. I am one lecture behind. Was it worth it?
  1. This class is not a prereq to anything else so I do not HAVE TO COVER everything.
  2. If I don't cover the topics on the syllabus then the guy who made up the syllabus, Gasarch, might get mad at me. This does not worry me.
  3. I doubt I will end up cutting anything important. I usually spend the last few lectures on misc topics (e.g., sparse sets cannot be NPC unless P=NP) which I can skip.
  4. They were interested in this. Not just Amber, but the whole class. That makes it worthwhile.
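The Union-Find structure mentioned above can be sketched as follows. This is the standard textbook version with path compression and union by rank (a sketch, not anything specific to the course); the inverse-Ackermann amortized bound applies to exactly this implementation:

```python
class UnionFind:
    """Disjoint sets with path compression and union by rank.
    A sequence of n operations runs in amortized O(n * alpha(n)) time,
    where alpha, the inverse of Ackermann's function, is at most 4
    for any input that fits in the physical universe."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point nodes at their grandparents as we climb.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: hang the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2); uf.union(3, 4)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(0) == uf.find(3))  # False
```

The lower bound says no structure can beat this amortized bound, which is why Amber (or anyone) cannot do O(n).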
Misc:
  1. I told the linguistics major in the class that Ackermann's function has applications in linguistics. He believed me until I told him YOU'VE BEEN PUNK'D!!
  2. Ackerman's function gets 2,870,000 hits on Google. Ackermann's function gets 6,740,000 hits on Google. Does that make Ackermann correct? What if the wrong spelling got more hits?

Wednesday, April 13, 2011

Workshop/Award/Conference/Who wants to review books?

  1. The Center for Intractability at Princeton is having a Workshop on Approximation Algorithms. Here is the schedule of talks.
  2. Vijay Vazirani, one of the organizers of the workshop, has won a Guggenheim CONGRATS!
  3. There will be a conference on Theory of Computation as a Lens on the Sciences. Is Theory of Computation a Lens on the Sciences? What does that even mean? Go to the conference and find out! Who is the person under the Lens in the picture at the left side of this link? A clue: the answer is RELATIVEly easy.
  4. Here is a list of books I need reviewed for my SIGACT NEWS column: HERE. To review a particular book email me at gasarch@cs.umd.edu. Before volunteering you should read my advice for reviewers. Also download a template for reviews either here as LaTeX or here as plaintext. I would like emails before April 25, which is when I put the next column in final form. This way that column's list of books I want reviewed will be more accurate.
  5. When does complexityblog make announcements? It is sporadic and somewhat random (Kolmogorov random?). Hence you are encouraged to look at this link for announcements of most theory events.

Tuesday, April 12, 2011

Do You Know the Way to San Jose?

Lots of CS Goodness going on at FCRC June 4-11.

  • New for 2011 the STOC Poster Session: Share your exciting research on theory's greatest stage! Submission deadline May 2. Come and enjoy the posters and some refreshments Monday night. 
  • Les Valiant gives his Turing award lecture Sunday evening followed by the STOC reception.
  • Other great plenary speakers including Watson's David Ferrucci, reCAPTCHA's Luis von Ahn and theory's own Ravi Kannan. 
  • The ever fun STOC Business Meeting on Tuesday (hosted by yours truly) including the presentations of this year's Gödel and Knuth prizes.
  • There's also some conferences: STOC (Mon-Wed), Complexity (Wed-Fri), EC (Tutorials/Workshops Sun-Mon, Conference Tue-Thu), PODC (Mon-Wed), SPAA (Sat-Mon) and many more.
  • Act now if you need a visa or STOC student travel support.
Register today (or by May 16 to avoid late fees) and I'll see you all in San Jose!

Thursday, April 07, 2011

The Mathematics of Hugging my Great Niece Jordan

I have already blogged about (trying to) teach my nephew Jason math here and my great nephew Justin math here. Now it's my great niece Jordan's turn.

I was at a dinner with relatives including my 12 year old great niece Jordan. There were 10 people at the dinner.

BILL: Jordan, if everyone at this table hugged everyone else, how many hugs would there be?

JORDAN: If I get it right will you give me a hug?

BILL: I'll give you a hug in any case.

JORDAN: Okay. 10 times 10... so 100.

BILL: Can you hug yourself?

JORDAN: Sure (she then hugs herself).

BILL: For this problem let's assume you cannot hug yourself. Then how many?

JORDAN: Oh, that changes things. It's 10 times 9... so 90.

BILL: If I hug you and then you hug me, does that count as one hug or two?

JORDAN: Oh, that changes things. How do you do it?

BILL: Your answer of 90 counted BILL-HUGS-JORDAN and also JORDAN-HUGS-BILL. The same is true for every pair. So every pair was counted twice.

JORDAN: So... is the answer (9 times 10)/2 ... 45 ?

BILL: YES! Great. (They hug.)

JORDAN: You're not just a great uncle, you're an AWESOME Uncle!

BILL: And you're an AWESOME Niece!
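Jordan's reasoning is easy to check by brute force; a tiny sketch (her final answer 45 is just the number of ways to choose 2 people out of 10):

```python
from itertools import combinations

people = list(range(10))  # the 10 people at dinner

# Jordan's second answer: ordered pairs, counting BILL-HUGS-JORDAN
# and JORDAN-HUGS-BILL separately (no self-hugs).
ordered = [(a, b) for a in people for b in people if a != b]

# The final answer: unordered pairs, one hug per pair.
unordered = list(combinations(people, 2))

print(len(ordered), len(unordered))  # 90 45
```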

Wednesday, April 06, 2011

Kanellakis and Grace Murray Hopper Prizes

The ACM announced several award winners today. Two of particular interest to the theory community.

Craig Gentry, recipient of the Grace Murray Hopper Award for his breakthrough construction of a fully homomorphic encryption scheme, which enables computations to be performed on encrypted data without unscrambling it.  This long-unsolved mathematical puzzle requires immense computational effort, but Gentry’s innovative approach broke the theoretical barrier to this puzzle by double encrypting the data in such a way that unavoidable errors could be removed without detection.  This insight has the potential to result in adaptable cryptography methods that can prevent security breaches and protect sensitive personal data.  Gentry is a researcher at IBM.  In 2009, he won the ACM Doctoral Dissertation Award.  The Hopper Award recognizes the outstanding young computer professional of the year. 
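To get a feel for what "homomorphic" means, here is a toy sketch of my own: a one-time-pad-style cipher that supports only addition and requires the receiver to combine keys. It is nothing like Gentry's lattice-based construction (which handles both addition and multiplication under a single key), but it shows the core idea of computing on data while it stays encrypted:

```python
import random

# Toy cipher over Z_N: Enc_k(m) = (m + k) mod N.  It is additively
# homomorphic: the sum of two ciphertexts decrypts (under the sum of
# the keys) to the sum of the plaintexts, so a server can add
# encrypted numbers without ever seeing them.
N = 10**9 + 7

def keygen():
    return random.randrange(N)

def enc(k, m):
    return (m + k) % N

def dec(k, c):
    return (c - k) % N

k1, k2 = keygen(), keygen()
m1, m2 = 1234, 5678
c1, c2 = enc(k1, m1), enc(k2, m2)

c_sum = (c1 + c2) % N             # computed on ciphertexts alone
print(dec((k1 + k2) % N, c_sum))  # 6912
```

A fully homomorphic scheme must support arbitrary circuits, not just sums, which is exactly the long-unsolved part Gentry cracked.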


Kurt Mehlhorn, recipient of the Paris Kanellakis Theory and Practice Award for contributions to algorithm engineering that led to creation of the Library of Efficient Data Types and Algorithms (LEDA).  This software collection of data structures and algorithms, which Mehlhorn developed with Stefan Näher, provides practical solutions for problems that had previously impeded progress in computer graphics, computer-aided geometric design, scientific computation, and computational biology.  LEDA’s software has been incorporated in the applied research programs of thousands of companies worldwide in telecommunications, bioinformatics, Computer-Aided Design (CAD) and Geographic Information System (GIS), banking, optical products, and transportation.  Since 2001, LEDA has been developed and distributed by Algorithmic Solutions Software GmbH, founded by Mehlhorn with Näher and Christian Uhrig, who introduced a novel distribution model that is free to researchers and licensed to companies.  Mehlhorn is the founding director of the Max Planck Institute for Informatics and a professor at Saarland University in Saarbrucken, Germany.  A Fellow of ACM, he received the Gottfried Wilhelm Leibniz Prize in 1986, and the European Association for Theoretical Computer Science (EATCS) Award in 2010. The Kanellakis Award honors specific theoretical accomplishments that significantly affect the practice of computing.

Tuesday, April 05, 2011

A New Proof of the Nondeterministic Time Hierarchy


A nondeterministic time hierarchy was first proved by Cook and in the strongest form by Seiferas, Fischer and Meyer. Žák gave a simple proof that we sketched in this post. Here is another.

Theorem: If t1 and t2 are time-constructible functions and t1(n+1)=o(t2(n)) then NTIME(t1(n)) is strictly contained in NTIME(t2(n)).

Proof: 

Let M_1, M_2, … be an enumeration of nondeterministic Turing machines. We define a nondeterministic machine M that acts as follows on input w = 1^i 0 1^m 0 y:
  • If |y| < t1(i+m+2) then accept if M_i accepts both inputs 1^i 0 1^m 0 y0 and 1^i 0 1^m 0 y1 in t2(|w|) steps.
  • If |y| = t1(i+m+2) then accept if M_i rejects input 1^i 0 1^m 0 on the computation path described by y.
This machine uses time O(t2(n)). If NTIME(t1(n)) = NTIME(t2(n)) then there is an equivalent machine M_i using time O(t1(n)).
Since t1(n+1)=o(t2(n)) we have for sufficiently large m,



1^i 0 1^m 0 in L(M)
⇔ 1^i 0 1^m 0 y in L(M) for all y with |y| = 1
⇔ …
⇔ 1^i 0 1^m 0 y in L(M) for all y with |y| = t1(i+m+2)
⇔ M_i(1^i 0 1^m 0) rejects on all computation paths y,
a contradiction. QED
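For readers who prefer typeset notation, the equivalence chain is just (same symbols as above, restated in LaTeX):

```latex
\begin{align*}
1^i 0 1^m 0 \in L(M)
  &\iff 1^i 0 1^m 0\,y \in L(M) \text{ for all } y \text{ with } |y| = 1 \\
  &\;\;\vdots \\
  &\iff 1^i 0 1^m 0\,y \in L(M) \text{ for all } y \text{ with } |y| = t_1(i+m+2) \\
  &\iff M_i \text{ rejects } 1^i 0 1^m 0 \text{ on every computation path,}
\end{align*}
```

where the first and last lines together contradict L(M) = L(M_i).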

The advantage over Žák is that you only need t1 steps instead of exponential in t1. On the other hand Žák's proof can give you a unary language and can be generalized to a broader set of complexity measures.

Rahul Santhanam and I needed and discovered this proof for our recent paper. The proof came out of a failed attempt at an oracle to show that no such relativized proof would be possible.

Friday, April 01, 2011

The Complexity of the Soul

A CS vision professor once told me "Of course we know there is an efficient algorithm for that: humans can do it." Are we really nothing more than Turing machines running simple algorithms, using machine learning techniques hard-wired into our brains through evolution? How sad.

But it's not true. I think therefore I am. I have self-awareness. A Turing machine can't be self-aware. There are people who try to formalize self-awareness and then show those formulations can be realized on Turing machines. But these don't match my intuitive notion of self-awareness so self-awareness cannot be formalized. There is something beyond computation that allows me to be self-aware, something called the soul.

I suspect you readers all have souls too but I can't prove it.

Does the soul violate the Church-Turing thesis? Does it allow us to compute things beyond that of a Turing machine? Does it allow us to solve problems much faster than a soulless computer could ever do?

I think not. The soul is just another input to the Turing machine we call our brain. How the information gets from the soul to the brain is a process we may never fully understand, because reasoning about the soul requires the very soul we are trying to reason about.