Thursday, November 16, 2017

A Tale of Three Rankings

In the spring of 2018, US News and World Report should release its latest rankings of US graduate science programs, including computer science. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation based they roughly correspond to how we see CS departments, though some argue that reputation changes only slowly as the quality of a department changes.

US News and World Report also has a new global ranking of CS departments. The US doesn't fare that well on the list, and the rankings of the US programs on the global list are wildly inconsistent with the US list. What's going on?

75% of the global ranking is based on statistics from Web of Science. Web of Science mainly captures journal articles, while in computer science conferences typically carry higher reputation and selectivity. In many European and Asian universities, hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.

The CRA rightly put out a statement urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.

I've never been a fan of using metrics to rank CS departments but there is a relatively new site, Emery Berger's Computer Science Rankings, based on the number of publications in major venues. CS Rankings passes the smell test for both their US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.

Nevertheless I hope CS Rankings will not become the main ranking system for CS departments. Departments that wish to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, which in the long run would discourage risk-taking, long-range research, as well as innovations in improving diversity or educating graduate students.

As Goodhart's Law states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.

Monday, November 13, 2017

Can you measure which pangrams are natural?

A pangram is a sentence that contains every letter of the alphabet.

The classic is:

                                      The quick brown fox jumps over the lazy dog.

(NOTE- I had `jumped' but a reader pointed out that there was no s, and that `jumps' is the correct word)

which is only 35 letters.
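Checking whether a sentence is a pangram is easy to automate. Here is a minimal Python sketch (my own, just for illustration) that tests alphabet coverage and counts letters:

```python
import string

def is_pangram(sentence):
    """Return True if the sentence uses every letter of the alphabet at least once."""
    used = {c for c in sentence.lower() if c in string.ascii_lowercase}
    return len(used) == 26

def letter_count(sentence):
    """Count only the alphabetic characters (spaces and punctuation excluded)."""
    return sum(c.isalpha() for c in sentence)

fox = "The quick brown fox jumps over the lazy dog"
print(is_pangram(fox), letter_count(fox))  # True 35
```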

I could give a pointer to lists of such, but you can do that yourself.

My concern is:

a) are there any pangrams that have actually been uttered NOT in the context of `here is a pangram'?

b) are there any that really could be?

That is- which pangrams are natural?  I know this is an ill-defined question.

Here are some candidates for natural pangrams

1) Pack my box with five dozen liquor jugs

2) Amazingly few discotheques provide jukeboxes

3) Watch Jeopardy! Alex Trebek's fun TV quiz game

4) Cwm fjord bank glyphs vext quiz
(Okay, maybe that one is not natural as it uses archaic words. It means
``Carved symbols in a mountain hollow on the bank of an inlet irritated an
eccentric person.'' Could come up in real life. NOT. It uses every letter
exactly once.)

How can you measure how natural they are?

For the Jeopardy one I've shown it to people and asked them

``What is unusual about this new slogan for the show Jeopardy?''

and nobody gets it. More important, they believe it is the new slogan.

So I leave to the reader:

I) Are there other NATURAL pangrams?

II) How would you test naturalness of such?

Pinning down `natural' is hard. I did a guest post in 2004, before I was an official co-blogger, about when a problem (a set, for us) is natural, for example the set of all regular expressions with squaring (see here).

Thursday, November 09, 2017

Advice for the Advisor

A soon-to-be professor asked me recently if I could share some ideas on how to advise students. I started to write some notes only to realize that I had already posted on the topic in 2006.
Have students work on problems that interest them not just you. I like to hand them a proceedings of a recent conference and have them skim abstracts to find papers they enjoy. However if they stray too far from your research interests, you will have a hard time pushing them in the right directions. And don't work on their problems unless they want you to.
Keep your students motivated. Meet with them on a regular basis. Encourage students to discuss their problems and other research questions with other students and faculty. Do your best to keep their spirits high if they have trouble proving theorems or are not getting their papers into conferences. Once they lose interest in theory they won't succeed.
Feel free to have them read papers, do some refereeing and reviewing, give talks on recent great papers. These are good skills for them to learn. But don't abuse them too much.
Make sure they learn that selling their research is as important as proving the theorems. Have them write the papers and make them rewrite until the paper properly motivates the work. Make them give practice talks before conferences and do not hold back on the criticism.
Some students will want to talk about some personal issues they have. Listen as a friend and give some suggestions without being condescending. But if they have a serious emotional crisis, you are not trained for that; point them to your university counseling services.
Once it becomes clear a student won't succeed working with you, or won't succeed as a theorist or won't succeed in graduate work, cut them loose. The hardest thing to do as an advisor is to tell a student, particularly one who tries hard, that they should go do something else. It's much easier to just keep them on until they get frustrated and quit, but you do no one any favors that way.
Computer science evolves dramatically but the basic principles of advising don't. This advice pretty much works now as well as it did in 2006, in the 80's when I was a student, or even in the 18th century. Good advising never goes out of style.

Of course I don't and can't hand out a physical proceedings to a student to skim. Instead I point to on-line proceedings but browsing just doesn't have the same feel.

Looking back I would add some additional advice. Push your students and encourage them to take risks with their research. If they aren't failing to solve their problems, they need to try harder problems. We too often define success as having a paper accepted into a conference. Better to have an impact on what others do.

Finally remember that advising doesn't stop at the defense. It is very much a parent-child relationship that continues long after graduation. Your legacy as a researcher will eventually come to an end. Your legacy as an advisor will live on through those you advise and their students and so on to eternity.

Monday, November 06, 2017

The two fears about technology- one correct, one incorrect


When the Luddites smashed loom machines their supporters (including Lord Byron, Ada Lovelace's father) made two arguments in favor of the Luddites (I am sure I am simplifying what they said):

  1. These machines are tossing people out of work NOW and this is BAD for THOSE people. In this assertion they were clearly correct. (`let's just retrain them' only goes so far).
  2. This is bad for mankind! Machines displacing people will lead to the collapse of civilization! Mankind will be far worse off because of technology. In this assertion I think they were incorrect. That is, I think civilization is better off now because of technology. (If you disagree leave an intelligent polite comment. Realize that just by leaving a comment you are using technology. That is NOT a counterargument. I don't think it's even IRONY. Not sure what it is.)
  3. (This third one is mine and it's more of a question.) If you take the human element out of things then bad things will happen. There was a TV show where a drone strike was about to be carried out, but a HUMAN noticed there were red flowers on the car, deduced it was a wedding, and the strike was called off. Yay! But I can equally well see the opposite: a computer program notices things indicating it's not the target that a person would have missed. But of course that wouldn't make as interesting a story. More to the point: if we allow computers to make decisions without the human element, is that good or bad? For grad admissions does it get rid of bias or does it reinforce bias? (See the book Weapons of Math Destruction for an intelligent argument against using programs for, say, grad admissions and other far more important things.)
I suspect that the attitudes above greeted every technological innovation. For AI there is a similar theme but with one more twist: the machines will eventually destroy us! Bill Gates and Stephen Hawking have expressed views along these lines.

When Deep Blue beat Kasparov in chess there were some articles about how this could be the end of mankind. That's just stupid. For a more modern article on some of the dangers of AI (some reasonable, some not) see this article on Watson.

It seems to me that AI can do some WELL DEFINED tasks (e.g., chess) very well, and even some not-quite-so-well-defined things (natural language translation) very well, but the notion that machines will evolve to be `really intelligent' (not sure that is well defined), think they are better than us, and destroy us seems like bad science fiction (or good science fiction).

Watson can answer questions very, very well. Medical diagnosis machines may well be much better than doctors. While this may be bad news for Ken Jennings and for doctors, I don't see it being bad for humanity in the long term. Will we one day look back at the fears of AI and see that they were silly--- the machines did not, Terminator-style, turn against us? I think so. And of course I hope so.
  


Thursday, November 02, 2017

Matching and Complexity

Given a group of people, can you pair them up so that each pair are Facebook friends with each other? This is the famous perfect matching problem. The complexity of matching has a rich history which got a little richer in the past few months.

For bipartite graphs (consider only friendships between men and women), we have had fast matching algorithms since the 1950's via augmenting paths. In the 1965 classic paper, Paths, Trees, and Flowers, Jack Edmonds gives a polynomial-time algorithm for matching on general graphs. This paper also laid out an argument for polynomial time as efficient computation that would lead to the complexity class P (of P v NP fame).
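To make the augmenting-path idea concrete, here is a minimal Python sketch (names and example my own) of the classic bipartite algorithm: repeatedly look for an augmenting path from an unmatched left vertex to an unmatched right vertex and flip it. This is the easy bipartite case, not Edmonds' blossom algorithm for general graphs.

```python
def bipartite_matching(adj, n_left, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    adj[u] lists the right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right   # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        # Look for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size, match_right

# 3 people on each side; a perfect matching exists
adj = {0: [0, 1], 1: [0], 2: [1, 2]}
print(bipartite_matching(adj, 3, 3))   # (3, [1, 0, 2])
```

Each of the n_left searches costs at most the number of edges, which is where the classic polynomial bound comes from.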

After Razborov showed that the clique problem doesn't have polynomial-size monotone circuits, his proof techniques also showed that matching doesn't have polynomial-size monotone circuits, and Raz and Wigderson showed that monotone circuits for matching require linear depth (and monotone formulas exponential size). Because of Edmonds' algorithm, matching does have polynomial-size circuits in general. NOTs are very powerful.

Can one solve matching in parallel, say in the class NC (Nick's Class, after Pippenger) of problems computable by a polynomial number of processors in polylogarithmic time? Karp, Upfal and Wigderson gave a randomized NC algorithm for matching. Mulmuley, Vazirani and Vazirani proved an isolation lemma that allows a randomized reduction of matching to the determinant. Howard Karloff exhibited a Las Vegas parallel algorithm, i.e., one that never makes a mistake and runs in expected polylogarithmic time.
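The connection between matchings and determinants that these results exploit is easiest to see in the older randomized test due to Lovász: plug random values into the Tutte matrix of the graph and check whether the determinant is nonzero. Determinants are computable in NC, which is why this algebraic view is the route to parallel algorithms. A rough Python sketch, with sequential Gaussian elimination standing in for the parallel determinant computation (this is not the isolation-lemma construction itself):

```python
import random
from fractions import Fraction

def determinant_is_nonzero(M):
    """Gaussian elimination over the rationals; True iff det(M) != 0."""
    n = len(M)
    M = [row[:] for row in M]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return False
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return True

def has_perfect_matching(n, edges, trials=5):
    """One-sided-error test: a nonzero determinant certifies a perfect matching;
    if every trial gives zero, there is probably no perfect matching."""
    for _ in range(trials):
        T = [[Fraction(0)] * n for _ in range(n)]
        for (i, j) in edges:            # random substitution into the Tutte matrix
            x = Fraction(random.randint(1, 10**6))
            T[i][j], T[j][i] = x, -x
        if determinant_is_nonzero(T):
            return True
    return False

print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 3)]))  # True: match 0-1 and 2-3
print(has_perfect_matching(4, [(0, 1), (1, 2)]))          # False: vertex 3 is isolated
```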

Can one remove the randomness? An NC algorithm for matching remains elusive but this year brought two nice results in that direction. Ola Svensson and Jakub Tarnawski give a quasi-NC algorithm for general graph matching. Quasi-NC means a quasipolynomial (2^polylog) number of processors. Nima Anari and Vijay Vazirani give an NC algorithm for matching on planar graphs.

Matching is up there with primality, factoring, connectivity, graph isomorphism, satisfiability and the permanent as fixed algorithmic problems that have played such a large role in helping us understand complexity. Thanks, matching problem, and may you find NC nirvana in the near future.

Tuesday, October 31, 2017

The k=1 case is FUN, the k=2 case is fun, the k ≥ 3 case is... you decide.

 (All of the math in this post is in here.)


The following problem can be given as a FUN recreational problem to HS students or even younger: (I am sure that many of you already know it but my point is how to present it to HS students and perhaps even younger.)

Alice will say all but ONE of the elements of {1,...,10^10} in some order.

Bob listens with the goal of figuring out the missing number. Bob cannot possibly store 10^10 numbers in his head. Help Bob out by giving him an algorithm which will not make his head explode.

This is an easy and fun puzzle. The answer is in the writeup  I point to above.
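For readers who want to check themselves, here is a sketch of one standard answer (not necessarily the one in the writeup): Bob keeps a running sum, so he only ever stores a couple of numbers.

```python
def find_missing(n, stream):
    """stream yields all but one of 1..n in any order; return the missing number."""
    return n * (n + 1) // 2 - sum(stream)

# toy example with n = 10 standing in for 10^10
print(find_missing(10, (x for x in range(1, 11) if x != 7)))  # 7
```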

The following variant is a bit harder but a bright HS student could get it: Same problem except that Alice leaves out TWO numbers.
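For two missing numbers, one standard approach (again a sketch, not necessarily the intended solution) is to track both the running sum and the running sum of squares and then solve for the pair:

```python
import math

def find_two_missing(n, stream):
    """stream yields all but two of 1..n; return the two missing numbers."""
    s1 = s2 = 0
    for v in stream:
        s1 += v
        s2 += v * v
    sum_xy = n * (n + 1) // 2 - s1                   # x + y
    sumsq_xy = n * (n + 1) * (2 * n + 1) // 6 - s2   # x^2 + y^2
    prod_xy = (sum_xy * sum_xy - sumsq_xy) // 2      # x * y
    disc = math.isqrt(sum_xy * sum_xy - 4 * prod_xy)
    return (sum_xy - disc) // 2, (sum_xy + disc) // 2

# toy example with n = 10: Alice leaves out 3 and 8
print(find_two_missing(10, (x for x in range(1, 11) if x not in (3, 8))))  # (3, 8)
```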

The following variant is probably more appropriate for a HS math competition than for a FUN gathering of HS students: Same problem except that Alice leaves out THREE numbers.

The following variant may be easier because it's harder: Alice leaves out k numbers, k a constant. It might be easier than the k=3 case since the solver knows NOT to use properties specific to 3.

I find it interesting that the k=1, k=2, and k≥ 3 cases are on different levels of hardness.  I would like a more HS answer to the k≥ 3 case.

Thursday, October 26, 2017

2017 Fall Jobs Post

You're finishing up grad school or a postdoc and ask yourself what should I do for the rest of my life? We can't answer that for you but we can help you figure out your options in the annual fall jobs post. We focus mostly on the academic jobs. You could work in industry but there's nothing like choosing your own research directions and working directly with students and taking pride in their success.

For computer science faculty positions best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. AcademicKeys also lists a number of CS jobs. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway since some departments may hire theorists when other opportunities to hire dry up. Think global--there are growing theory groups around the world and in particular many have postdoc positions to offer.

The computer science job market remains hot with most CS departments trying to hire multiple faculty. Many realize the importance of having a strong theory group, but it doesn't hurt if you can tie your research to priority areas like big data, machine learning and security.

Remember in your research statement, your talk and your interview you need to sell yourself as a computer scientist, not just a theorist. Show interest in other research areas and, especially in your 1-on-1 meetings, find potential ways to collaborate. Make the faculty in the department want you as a colleague, not just someone hiding out proving theorems.

Good luck to all on the market and we can't wait for our Spring 2018 jobs post to see where you all end up.

Monday, October 23, 2017

Open: PROVE that pumping and reductions can't prove every non-reg lang non-reg.

Whenever I post on regular langs, whatever aspect I am looking at, I get a comment telling me that we should stop using the pumping lemma (and often asking me to stop talking about it) and have our students prove things non-regular via either the Myhill-Nerode theorem or Kolmogorov complexity. I agree with these thoughts pedagogically but I am curious:

Is there a non-reg lang L such that you CANNOT prove L non-reg via pumping and reductions?

There are many pumping theorems (one of which is an iff, so you could use it on every non-reg lang, but you wouldn't want to-- it's in the paper pointed to later). I'll pick the most powerful pumping lemma that I can imagine teaching to a class of ugrads:

If L is regular then there exists n_0 such that for all w ∈ L with |w| ≥ n_0 and all prefixes x' of w with |w| - |x'| ≥ n_0, there exist x, y, z such that

|x| ≤ n_0

y is nonempty

w = x'xyz

for all i ≥ 0, x'xy^i z ∈ L

If this is all we could use then the question is silly: just take

{ w : number of a's NOT EQUAL number of b's }

which is not regular but satisfies the pumping lemma above. SO I also allow closure properties. I define (and this differs from my last post--- I thank my readers, some of whom emailed me, for help in clarifying the question)

A ≤ B

if there exists a function f such that if f(A) = B then A regular implies B regular

(e.g., f(A) = A ∩ a*b* )

(CORRECTION: Should be B Regular --> A regular. Paul Beame pointed this out in the comments.)

(CORRECTION: My definition does not work. I need something like what one of the commenters suggested and what I had in a prior blog post. Call f a closure function if, for all A, A regular implies f(A) regular. For example, f(A) = A ∩ a*b*. We want {a^n b^n} ≤ {w : number of a's = number of b's} via this f. So: A ≤ B if there is a closure function f with f(B) = A.)


A set B is Easily proven non-reg if either

a) B does not satisfy the pumping lemma, or

b) there exists a set A that does not satisfy the pumping lemma such that A ≤ B.

OPEN QUESTION (at least open for me, I am hoping someone out there knows the answer)

Is there a language that is not regular but NOT easily proven non-reg?




Ehrenfeucht, Parikh, and Rozenberg, in a paper Pumping Lemmas for Regular Sets (I could not find the official version but I found the Tech Report online: here. Ask your grandparents what a Tech Report is. Or see this post from Lance about Tech Reports: here.), proved an iff pumping lemma. They gave as their motivating example an uncountable number of languages that could not be proved non-regular even with a rather fancy pumping lemma. But their langs CAN be easily proven non-reg. I describe that here. (This is the same paper that proves an iff Pumping Lemma. It uses Ramsey Theory so I should like it. Oh well.)

SO, I looked around for candidates for non-reg languages that could not be easily proven non-regular. The following were candidates, but I unfortunately(?) found ways to prove them non-regular using the PL and closure. (I found the ways by asking some bright undergraduates; to give credit, Aaron George did these.)

{ a^i b^j : i and j are relatively prime }

{ x x^R w : x, w nonempty }  where x^R is the reverse of x.

I leave it to the reader to prove these are easily proven non-regular.

To re-iterate my original question: Find a non-reg lang that is not easily proven non-reg.

Side question: my definition of reduction seems a bit odd in that I am defining it the way I want it to turn out. Could poly-Turing reductions have been defined as A ≤ B iff (A ∈ P implies B ∈ P)? Is that equivalent to the usual definition? Can I get a more natural definition for my regular reductions?