Friday, March 30, 2012

The Value of an Academic Publication

Russell O'Connor's paper was accepted into last year's ACM SIGPLAN Workshop on Generic Programming. Russell put the final version of his paper on the ArXiv under a public domain dedication. Having given up his author's rights, Russell had none left to transfer to the ACM. After much discussion the ACM decided not to publish the paper in the proceedings. The abstract in the proceedings states
We note that one of the papers presented in the workshop is not included in the proceedings. This paper, "Functor is to Lens as Applicative is to Biplate: Introducing Multiplate" by Russell O'Connor, is accessible as arXiv:1103.2841v2 [cs.PL].
Russell gives his account but he focuses on the ArXiv instead of the public domain aspect. In the STOC CFP we encourage putting your submissions on the ArXiv and similar sites. The issue that worried the ACM was the loss of rights. The ACM could have published the paper, since it was in the public domain, but it wouldn't have control of that publication and didn't want to set a precedent. Scott Delman of the ACM responded here.

An academic paper you write has little direct value to you (the paper itself, not the intellectual property within). No one is likely to give you any money for that paper. But your paper does have financial value as part of a collection in a journal or conference proceedings. Commercial and non-profit publishers know how to collect on this value. You might complain that this puts your paper behind a paywall, but all the major publishers in CS allow you to post earlier drafts on your homepage and on archive sites. As long as you make that effort, people will have access to your papers.

It does take some money (or considerable person hours) to maintain even an electronic journal or conference proceedings. But publishers get more value than that. For the ACM, the DL revenue is a major source of funding for ACM activities and those of the SIGs.

Of course you should never fully trust the opinion of someone who has a financial interest in a position, and as SIGACT chair I can attest that we certainly make use of the DL revenue. We are hoarding some in case the DL revenue shrinks in the future.

It seems a shame to leave the monetary value of our papers on the table, but that value also shouldn't be exploited. Big discussions will continue on the publications issue, at the ACM and other publishers and at all levels of the CS community. There will be a Dagstuhl workshop focused on this topic in the fall. Figuring out the right model for publications will not be easy.

My biggest fear is that lack of a plan will lead to a degradation in the quality of our publications and then everyone loses.

Thursday, March 29, 2012

Sanjeev Arora wins ACM-Infosys Award

Sanjeev Arora will receive the 2011 ACM-Infosys Foundation Award, the highest honor ACM gives to a mid-career scientist.
Sanjeev Arora is one of the architects of the Probabilistically Checkable Proofs (PCP) theorem, which revolutionized our understanding of complexity and the approximability of NP-hard problems. He helped create new approximation algorithms for fundamental optimization problems such as the Sparsest Cuts problem and the Euclidean Travelling Salesman problem, and contributed to the development of semi-definite programming as a practical algorithmic tool. He has played a pivotal role in some of the deepest and most influential results in theoretical computer science, and continues to inspire colleagues and new generations of researchers.
Congratulations to Sanjeev!

In other news, the IEEE Computer Society W. Wallace McDowell Award is being awarded to Ron Fagin, and the US gives a big push for big data (CCC blog has details).

Another reminder for upcoming deadlines: STOC Posters (3/31), ACM Turing Student Travel (4/2), STOC Student Travel (4/4) and FOCS papers (4/4).

Tuesday, March 27, 2012

"Math was a mistake- I made it too hard"

(REMINDER- IF you are a STUDENT who wants to GOTO STOC 2012 but needs money to go then you should GOTO the STOC 2012 homepage and click on Travel Support. Deadline to apply: April 4.)

In the movie Oh God Book II God, played by George Burns, says
Math was a mistake, I made it too hard
Or at least I thought he said it. I have repeated this quote both in the blog and as a comment on Scott's blog. As I noted on my blog, if you Google
"Math was a mistake, I made it too hard"
all of the hits that you get lead back to me. This is still true. (Though it won't be after this post goes up.)

I thought it was a great quote so I am surprised it is not better known. FINALLY Oh God Book II came on TV so I watched it just to see if what I've been quoting all these years is really in the movie. (Also, it's a pretty good movie, for what it is.)

Alas, that is NOT the quote! What God really said was
"Mathematics, that was a mistake. I should have made the whole thing a little easier"
I think my version is better.

There are other misquoted quotes. My favorite: Stalin never said
The capitalists will sell us the rope with which we will hang them.
He did say:
The capitalists will furnish credits which will serve us for the support of the Communist Party in their countries and, by supplying us materials and technical equipment which we lack, will restore our military industry necessary for our future attacks against our supplier. To put it in other words, they will work on the preparations of their own suicide. (The Yale Book of Quotations (2006))
Both for George Burns and Stalin I am troubled- are we better off using the pithy version of the quotes, which DOES capture what they meant, or the original?

We have a similar issue when we teach- do we teach math the way it was invented or discovered (messy but motivated) or the way it is understood now (clean but unmotivated)? Hopefully it's not an either-or question and we can do some of both.

Monday, March 26, 2012

What is an Elegant Proof?

What is an elegant proof? I do not know and I doubt it can be well defined; however, we all know it when we see it. I welcome comments on the topic; however, I will give a very simple example for a contrast of elegant and non-elegant. All of the math discussed here informally is done formally here.

Consider the following theorem:
There is no 2-digit number that is the sum of the squares of its digits.
One crude measure of elegance is to minimize the number of numbers that you need to check directly are not the sum of the squares of their digits. With this in mind, here are several proof sketches.
  1. One could prove this by enumerating all possible 2-digit numbers. If you wrote a program for this then you could use it on similar problems. Even so, I suspect most of us would call this proof NOT ELEGANT. This requires 89 CHECKS.
  2. There is a proof that first easily eliminates 10, 20, 30, 40, 50, 60, 70, 80, 90 and then looks at every interval [11,19], [21,29], ..., [91,99]. More elegant than enumeration and certainly shorter. This required 8 CHECKS.
  3. There is a proof that looks at the equation 10a + b = a² + b². We look at it mod 2, mod 4, and mod 10. This gives what I thought was the most elegant proof; however, after writing it down carefully it was about the same length as the interval proof. This required 2 CHECKS.
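The enumeration in sketch 1 is, by the way, a one-liner in (say) Python:

```python
# Check every 2-digit number against the sum of the squares of its digits.
hits = [n for n in range(10, 100)
        if n == sum(int(d) ** 2 for d in str(n))]
print(hits)  # -> [] : no 2-digit number is the sum of the squares of its digits
```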
Consider the following theorem:
There is no x ≥ 2 that is the sum of the squares of its digits.
The cases of 2 ≤ x ≤ 9 and x ≥ 100 can be done with zero checks, so using the best proof of the last theorem, we can do this theorem with 2 checks.

Consider the following theorem.
The only 3-digit number that is the sum of the cubes of its digits is 153. (NOTE ADDED LATER: This is INCORRECT. A commenter pointed out that 370, 371, and 407 also work. I will fix the proof and statement later and see if these are the only ones. Score a point for the `do it by a computer enumeration' argument!)
I have a proof of this which is... not quite elegant but not brute force. I get it down to only 21 CHECKS. If you have a better proof in terms of NUMBER OF CHECKS that is not contrived to reduce NUMBER OF CHECKS I would be very interested to see it. Of course, I cannot define contrived rigorously or elegantly.
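In the spirit of the computer-enumeration argument, a few lines of Python confirm the commenter's correction:

```python
# Find all 3-digit numbers equal to the sum of the cubes of their digits.
hits = [n for n in range(100, 1000)
        if n == sum(int(d) ** 3 for d in str(n))]
print(hits)  # -> [153, 370, 371, 407]
```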

Friday, March 23, 2012

David Waltz (1943-2012)

David Waltz, head of the Center for Computational Learning Systems at Columbia, passed away yesterday at the age of 68 after a battle with a brain tumor.

David Waltz is best known for his research in artificial intelligence but I'll remember him most for his leadership of the NEC Research Institute when I was there. David fought hard and risked his career to protect basic research, particularly theory, at NEC. He would end up losing the presidency of NEC in this conflict. It is a great leader who is willing to risk everything to support his people.

Thursday, March 22, 2012

A Busy Time of The Year

It's spring break week at Northwestern so life is supposed to be quiet. No such luck.

Endre Szemerédi will receive the 2012 Abel Prize, perhaps the most prestigious annual award in mathematics. Tim Gowers has a nice summary of his work.

With this tweet the head of Yahoo Research officially leaves for Google.
This likely marks the beginning of the end for what was a great corporate research environment.

FOCS submission deadline is April 4. There are some minor changes in the submission format from last year.

Registration for the ACM Turing Centenary Event is full but students can still attend through a limited number of ACM SIGACT Student Scholarships.

Registration for STOC is now live. The final program will be posted soon.
Even as I write this post, breaking news that U. Illinois president Michael Hogan resigned. Can't wait to see what Jeff has to say about this.

Tuesday, March 20, 2012

Heading South

Starting in July, I'll be chair of the School of Computer Science in the College of Computing at Georgia Tech. Annie Antón from NC State will chair the School of Interactive Computing. Here's the official announcement. I'm truly excited to be working with Annie, the Dean of Computing Zvi Galil and the incredible faculty there to move Georgia Tech forward.

I can anticipate many of your questions: How does moving from the land of Lincoln to the land of Lipton affect the blog? How do I get a job at Georgia Tech? Wait, does this mean there is a theory opening at Northwestern? Is this really the fourth job you've had since starting the blog ten years ago? All in due time.

Monday, March 19, 2012

Intel Science Talent Search

Last week I went to the Intel Science Talent Search Awards Ceremony in DC, probably the most prestigious math and science competition for American high school students.

I mentored one of the finalists, Adam Kalinich, of the Illinois Math and Science Academy. Adam studied poset games, where players take turns picking an element x of a finite poset and removing all y ≥ x. The first one to empty the poset wins. The complexity of deciding who wins a poset game is wide open. Adam showed how to convert a game where one player wins into a game where the other wins, a surprisingly tricky task. His paper appeared in IPL (also on ArXiv).
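Tiny poset games are easy to solve by brute force with memoization. A sketch in Python (the dictionary encoding of the poset is my own illustrative choice, not Adam's):

```python
from functools import lru_cache

# A 3-element poset with minimum element 'a' below incomparable 'b' and 'c'.
# up[x] is the set of all y >= x.
up = {'a': {'a', 'b', 'c'}, 'b': {'b'}, 'c': {'c'}}

@lru_cache(maxsize=None)
def first_player_wins(state):
    """state: frozenset of remaining elements.  A move picks x in state
    and removes every y >= x; the player who empties the poset wins."""
    return any(not first_player_wins(state - up[x]) for x in state)

print(first_player_wins(frozenset('abc')))  # -> True: take 'a' and win at once
```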

Lots of math and computer science among the finalists and winners. The other Illinois finalist, Jordan Cutler, worked on practical implementations of quantum cryptography with Prem Kumar, another professor in my department. Jordan, who is a cousin of complexity theorist Steve Homer, came in 10th place.

Anirudh Prabhu had the coolest math result on perfect numbers. It's been conjectured that there are no odd perfect numbers. Anirudh showed a non-constant lower bound for odd perfect numbers (if they exist) as a function of the number of factors. Only constant lower bounds were known before. Anirudh came in 7th place.
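For readers who want to play with the definition: a number is perfect if it equals the sum of its proper divisors. A naive Python sketch (this of course says nothing about Anirudh's lower bound; it just illustrates the objects involved):

```python
def is_perfect(n):
    # A perfect number equals the sum of its proper divisors.
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

print([n for n in range(1, 500) if is_perfect(n)])    # -> [6, 28, 496]
print(any(is_perfect(n) for n in range(1, 2000, 2)))  # -> False: no small odd ones
```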

David Ding got 4th place for his work on representation theory of Cherednik algebras. I don't know what those are either.

First place went to Nitin Tumma for work related to cancer.

The ceremony itself was a great scene. I'm a sucker for the pomp and circumstance. Walter Isaacson gave the keynote address and we all got autographed copies of his biography of Steve Jobs. Great fun was had by all.

I've seen the future of American science and it is awesome.

Friday, March 16, 2012

Judea Pearl wins Turing Award

(In this age of lightning-fast communication I suspect you all already know that Judea Pearl won the Turing Award. Even so, it is worth posting about.)

Judea Pearl has won the 2011 Turing Award (given in 2012); see here for the announcement. He is not a theorist; however, he did champion the use of probability and graph theory in AI.

Strong AI is the attempt to make computers match or exceed human intelligence. People in Strong AI might see how humans do things and try to get a program to mimic that. Putting aside philosophy (are we all just rather complicated DFAs?), the goal of Strong AI seems to just be hard to achieve even if it is possible (complexity issues?).

Weak AI has more modest goals (and a much shorter Wikipedia page)- let's get a computer that can do a well-defined task (e.g., medical diagnosis) well. They want to get things to work and may very well NOT take how humans do it as an inspiration. Do humans do the kinds of probabilistic calculations that Judea Pearl works with? I tend to doubt it.

Once Strong AI produces real results it is called Weak AI. Once Weak AI produces real-world products it's called something else (Robotics, Natural Language Processing, Vision; there are other examples).

I ask all of the following nonrhetorically. NONE of the questions are meant to question the award.

Did Judea Pearl work in Strong AI or Weak AI?

Was Judea Pearl one of the first people to put serious AI on a more rigorous foundation?

Does AI use serious math? (I know that Control theory does, though is that AI?)

Did Judea Pearl (or his group) build software that is actually being used someplace?

What are the criteria for good work in AI?

Who might be the next AI Turing Award Winner?

Tuesday, March 13, 2012

How do legit fields of knowledge decide between competing theories? How does Astrology?

Euler was born April 15, 1707. Hence by the Western Astrology that we all ignore he is an Aries. However, I recently heard about another way (also worth ignoring) of doing the signs under which he is a Pisces (see here). The other system has 13 signs. (There are some other systems, all worth ignoring, here.) For a serious article about what astrology really claims to say and why it's worth ignoring see here.

What does a field of study do if there are several competing theories?
  1. The Natural Sciences: I would like to think that the truth wins out... eventually. There may be struggles and politics and whatnot, but in the very end experiments are performed and the more predictive theory wins out. (I know it's more complicated than that.) There are some fields where it is hard to do experiments (e.g., String Theory). Others where the theory gives great explanatory power but where it might be hard to do direct experiments (Evolution). Those cases may be hard to deal with, but they manage. Does String Theory have great explanatory power?
  2. Mathematics: Here the question is not WHAT IS TRUE, since we can use proofs (again, I know it's more complicated than that), but WHAT IS WORTH STUDYING. Applied Math may still use the real world as a litmus test to some extent. More abstract kinds of math may be harder to test by looking at the real world. Do Large Cardinals exist? Is the Axiom of Determinacy true? Some people claim that they look for internal consistency and also intuitions. And some people live-and-let-live (you want to use AC, fine, I won't). (Again, I know it's more complicated than that.)
  3. Astrology: What does a field do when its predictions are either vague or no better than chance? If there were a well-designed experiment to test which theory has more predictive value then I suspect they would all be found wanting. (Then again, I'm a Sagittarius and we are known to be skeptical.) So what other criteria could they use? Internal consistency and intuition? Whichever one makes people feel better (some astrologers might claim that they are really psychologists, telling people what they want to hear to soothe them)? Whichever sells more books? Makes more money? Is there some sort of aesthetics involved? I ask this nonrhetorically. (An idea for a scam: The old astrology doesn't work because it doesn't take into account relativistic effects on the planets. Use my Relativistic Astrology! For people who like what they believe in to have some buzzwords from science thrown in!)
This does raise the general question- if there are competing theories in a field where it's hard or impossible to do experiments, what does the field do? Whoever yells loudest wins?

Monday, March 12, 2012

March Madness

Once again, America's favorite binary tree, the NCAA Men's National Championship Bracket. The tree seems to get more unbalanced every year. There are four regions, the East has the traditional 16 teams, the West and South have 18 teams each and the Midwest has 19 teams for a total of 68 teams. Single elimination starts tomorrow.

Many Americans participate in office pools where they fill out the bracket to predict which team wins each game. There's lots of math one can use for your bracket. But here is my advice: pick the higher-seeded team to beat the lower-seeded team. Every time. There will be upsets but you can't predict where they will be, so you have the best chance by predicting none of them.

The most likely seed combination to make the final four is two number 1's, a number 2 and a number 3, which has happened 3 times in the last 27 years. So why follow my advice, which predicts four number 1's? Because there are 12 ways to assign the seeds 1, 1, 2, 3 to the four regions and only one way to assign 1, 1, 1, 1, and you have to pick a specific combination when you fill out your bracket. All four number 1 seeds went to the final four only once in the last 27 years, but since there is only one such combination, it is more likely than any other single possibility.
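The counting step is easy to verify in Python: the number of distinct ways to hand a multiset of seeds to the four (distinguishable) regions is the number of distinct permutations.

```python
from itertools import permutations

def assignments(seeds):
    # Distinct ways to assign the given seeds to the four named regions.
    return len(set(permutations(seeds)))

print(assignments((1, 1, 2, 3)))  # -> 12
print(assignments((1, 1, 1, 1)))  # -> 1
```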

May the madness begin.

Friday, March 09, 2012

Scott wins the Waterman

The NSF's most prestigious prize, the Alan T. Waterman award, recognizes an outstanding young scientist (35 or under) in any field of science or engineering. Breaking with tradition this year the NSF picked not one but two winners, computer scientists Scott Aaronson (MIT) and Robert Wood (Harvard).

Most of you readers know Scott well. He's already an established leader in quantum computing and computational complexity and has his own awesome blog. Not sure why NSF decided to use a photo of Scott proving Karp-Lipton wearing a skirt. The CCC blog post points to Scott's TedxCaltech talk. I'll just link to the podcast I had with Scott back in 2005.

Computational complexity has won two Watermans in the last three years as Subhash Khot received the award in 2010.

Robert Wood is the principal investigator of the RoboBees project, which is pretty much as the title suggests, insect-inspired robots. Sweet revenge for being called the number one example of government waste by Sean Hannity.

Wednesday, March 07, 2012

When a+b is harder than b+a

Is 1+4 a harder calculation than 4+1? It may be if you are 2+3 years old. I asked my 7-2 year old great niece Noelle the following sequence of questions. I include how long it took her to answer.

Bill: How old are you?

Noelle: (2 seconds) 5

Bill: What is 3+2

Noelle: (3 seconds) 5

Bill: What is 2+3

Noelle: (2 seconds) 5

Bill: What is 4+1

Noelle: (1 second) 5

Bill: What is 1+4

Noelle: (8 seconds) 5

Bill: What is 7 take away 2

Noelle: (4 seconds) 5

Bill: What is the least d such that there is no formula for solving the general dth-degree equation

Noelle: (11 years) 5

Bill: What is the least k such that the kth Ramsey Number is not known

Noelle: (21 years) 5

The most interesting of these to me was that 1+4 took much longer than 4+1. Why is this? To do 1+4 she starts at 1 and adds 1 to it four times, so it's 1+1+1+1+1. To do 4+1 she starts with 4 and adds 1 once: 4+1. More generally, if we don't use the fact that addition is commutative and we view +1 as our basic operation, then the complexity of a+b is b.
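This model is easy to make precise. A small Python sketch of the hypothesized procedure, counting +1 steps as the cost:

```python
def add_with_cost(a, b):
    # Compute a+b by starting at a and applying +1 exactly b times.
    total, steps = a, 0
    for _ in range(b):
        total += 1
        steps += 1
    return total, steps

print(add_with_cost(4, 1))  # -> (5, 1): fast, like Noelle's 4+1
print(add_with_cost(1, 4))  # -> (5, 4): slower, like her 1+4
```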

It is important to realize that concepts such as the commutativity of addition, which are now obvious to us as adults, were not always obvious. Or perhaps not obviously useful for calculations.

Could a model of children's addition be defined and studied? Would this be a Math project, a Math Ed project, or a Child Development project? Has it already been done? I suspect that how children learn things has been studied extensively, but that well-defined questions about the complexity of children's addition have not. My ONE data point suggests that the complexity of a+b is b, but to really study this you would of course need more samples. SO, if any of you have relatives who are 5 or under (but can talk), try this out and let me know what you find.

Monday, March 05, 2012

The Internet of the Present

On this blog we rarely get non-spam comments on posts more than a few days old. Sometimes I can bring up a topic I had posted on just a few months ago and no one will notice. When people said they enjoyed my blog I used to ask them what posts they liked. I would just get a blank stare. I don't ask anymore.

In theory you can look at our old posts through the "Blog Archive" section on the left column (if you are reading this on the blog website). I doubt anyone actually does. Occasionally people get to old posts via Google searches but I've come to the realization that most posts I wrote more than a few weeks ago will never be read again.

That's too bad. Many of them are still quite relevant. But we live in the present. I'm just as guilty as everyone else. Sometimes I'll discover a great new blog and subscribe to its posts. But I'll never go back and read the old posts.

Blogs seem to be going out of style. Twitter doesn't even try: one cannot easily see someone's old tweets, and any brilliant tweet I make will expire in usefulness in just a few hours. Google+ is similar. Interestingly, Facebook with their new Timeline makes it possible to explore someone's early posts. The past remains there just in case someone cares.

I have no great insights or solutions to this problem. But what does it matter? A week from now you'll forget this post even existed.

Friday, March 02, 2012

Turing's Titanic Machine!

In the March CACM, Barry Cooper writes
We quote Peter J. Denning introducing the ACM Ubiquity Symposium on "What is Computation?" as saying: "Researchers in biology and physics have claimed the discovery of natural computational processes that have nothing to do with computers."
With Lance Fortnow distinctly underwhelmed: "Some people outside of computer science might think that there is a serious debate about the nature of computation. There isn't."
As often happens when experts disagree, the truth lies somewhere in between.
No it doesn't. My extreme point of view: a strong belief in the Church-Turing thesis, that the Turing machine captures the true notion of computation now and forever.

What's next? Casting doubt on 1+1=2? Sure no one has yet proved 1+1=3 but that doesn't mean it won't happen someday.