Tuesday, March 31, 2009

Submit and Forget

The FOCS deadline is fast approaching, this Thursday at 6:59 PM Eastern. 

I have a tradition of submit and forget. After I submit a paper to a conference I put it aside and don't think about it again until we get back the decision. In recent years I have broken that strict rule by sending the submission to an archive site like ECCC, but in general I don't work on, or even think about, a paper while the program committee does its thing. There's no use fretting about the paper while the committee decides, so it's best to keep busy with other research and activities.

No such luck for this year's FOCS. Umesh explains why we have to write a 2-page follow-up a week later. Lots of bloggers have weighed in; check out Jeff and everyone he links to. My main takeaway from Umesh's letter: authors cannot be trusted to give the proper motivations, explanations and examples in the initial write-up. And that's a shame.

The 2-pager makes bad writing self-fulfilling. Why bother with proper motivation at all in the first stage if you get to work on it later?

Monday, March 30, 2009

Change we can believe in!

Two issues have been brought up on some blogs lately that I want to comment on. I have opinions on both, but a strong opinion on a meta-issue.
  1. Should conferences have double-blind refereeing? This was brought up by ***SORELLE***, who thinks that double-blind is good, and later followed up by ***LANCE***, who thinks that double-blind is not needed. I think conferences should have it to avoid bias and perceived bias, so I agree with ***SORELLE***. But that is not the point I want to make.
  2. Should PC members be allowed to submit? This was discussed in this blog. 10 years ago I thought NO but over time I've changed my mind---it seems counterproductive to not allow some of the best people in our field to submit. But that is not the point I want to make.
  3. Should these issues be brought up, discussed, and given an actual chance of being CHANGED at various business meetings? I STRONGLY think so. Whenever these items are brought up they are shouted down without any real debate. (They are rarely brought up in the first place.) I imagine the following story: Alice goes to CRYPTO and someone brings up dropping double-blind submissions. Alice is among the people who shout it down giving the usual arguments against the change, but the real argument is because we've always done it this way. Later in the year Alice goes to STOC. Someone brings up having double-blind submissions. Alice is among the people who shout it down giving the usual arguments against the change, but the real argument is because we've always done it this way.
CRYPTO does double-blind submissions. COLT allows people on the committee to submit. STOC and FOCS do neither of these. Any of these could change what they do and use the others as a model of how to do it.

SO, my strong opinion is that these possible changes should be considered seriously. There must be some change we can believe in! Even if it goes a way I disagree with (e.g., COLT no longer allowing PC members to submit) I would be happy to see that change is possible.

Friday, March 27, 2009

Voting on the Web- Beware Colbert

What do The Hungarian Government, NASA, and my nephew have in common? (this is not a joke) They all lack an understanding of voting on the web.
  1. In 2006 the Hungarian government set up a website to take a vote on what to name a particular bridge in Hungary. Stephen Colbert urged his viewers to vote for The Stephen Colbert Bridge. (At the time The Chuck Norris Bridge was leading the vote.) The government overturned the result since Colbert is not Hungarian and he is not dead. They should either have NOT allowed write-ins OR not have held an election.
  2. In 2009 NASA set up a website to have a vote on what to name a room on the International Space Station. They already had two of the three rooms named: Unity and Harmony. I've read that they were hoping the last one would be Serenity---claiming that they wanted world peace to be a theme, though I think they were fans of the TV show Firefly. Once again Stephen Colbert urged his viewers to vote for Colbert. And this name won. NASA could overturn this (they will decide in April). I've read on websites statements like Stephen Colbert embarrassed NASA and NASA made the mistake of allowing write-ins. However, if they just go with Colbert and say they are happy with it, this will be far less embarrassing than overturning the vote. I hope they don't overturn it. But if they do, then why have an election in the first place? They can say It was only a Poll, but it sure looks like an election to me. But, to be fair, they have not overturned it yet so I can't be mad... yet.
  3. When my nephew's wife was pregnant my nephew set up a website for people to vote on the name of the baby. No, Stephen Colbert did not urge his viewers to vote for Stephen as first name and Colbert as middle name. And my nephew did not allow write-ins. But they didn't end up using the top vote-getter! This upset the family terribly, if by the family you mean Bill Gasarch and by upset you mean told them that they destroyed democracy. They named the kid George even though Al had won (this is a joke, though the rest of the story is true).
Hungary was just naive in a transitional time for things like these. NASA has less of an excuse. Neither realized that you cannot control votes on the web. And my nephew did not realize that once you hold an election, if you overturn the results, you might get your uncle annoyed.

Thursday, March 26, 2009

To Adopt or Not to Adopt

I decided to take the plunge and set up a Facebook page for my intro theory course that starts next week. I told someone and got the following response: "Facebook is so 2005. You should be using Twitter instead." I am not even sure what Twittering a course would mean.

A few weeks ago I was talking to a fellow professor, also married with children, who didn't have a cell phone. I tried to explain how having cell phones transformed our lives, no longer did we have to coordinate in advance. The other professor seemed unconvinced. It's very hard to convince someone who doesn't use a technology why one cannot live without it.

Of course I didn't tell him the story of when our family split up in a museum and then the entire AT&T network went down in the state. It took us 45 minutes after the museum closed to find each other.

So far I have avoided Twitter, either following others or writing my own, even though I already have three followers to my empty Twitter feed. I actually worry that I don't have enough time for another tool I can't live without. 


Wednesday, March 25, 2009

Return to Berkeley

A couple of weeks ago I returned to the UC Berkeley campus for the first time since my not so memorable first year of grad school. I went for an informal workshop on algorithmic randomness, as part of a 12 PI NSF project on the topic. We met in the math department at Evans Hall, which housed computer science back in the 80's. I discovered the west side offices of Evans have tremendous views of the San Francisco bay including both main bridges. I don't remember any theory faculty having west side views back in the day.

The biggest difference I noticed was walking down the streets and not once being accosted by people asking me for money. I wondered what happened to all the homeless in Berkeley and found them during an early morning jog: large numbers of them sleeping outside the student union. Walking by there a few hours later, the homeless had disappeared, replaced by a long line of students waiting for, I don't know, perhaps concert tickets of some sort.

Circumstances caused me to cut my trip short before I had a chance to visit the CS department or really explore the city and university. But I'll be back and it hopefully won't take me another 23 years to return.

Tuesday, March 24, 2009

Community

Noam Nisan, the Complexity theorist turned E-conomist, started a new blog Algorithmic Game Theory. Since, as Noam says, the Computational Complexity blog occasionally writes about algorithmic game theory, I hope the AGT blog can also occasionally talk about complexity.

Which brings me to talking about communities. We as academics belong to many different communities. First based on employment: I have my research group, the EECS department, the Engineering school and all of Northwestern University (not to mention my courtesy/adjunct positions). But also based on research: first among my research colleagues, and then computational complexity, theoretical computer science, computer science and the greater scientific and general academic world we live in.

All of these communities share the same basic mission: To develop, inform and educate on cutting-edge research. But when we have limited resources (funding, jobs, students, slots at conferences, people's attention), the priorities of different communities can come into conflict. At every level, we have to fight for our fair slice of the pie while at the same time working together to make the pie bigger.

This is the Computational Complexity weblog and I don't hide my research biases. But I am also a theoretical computer scientist, a computer scientist, a scientist and an academic. Balancing these communities is a challenge for all of us.

I worry most about the CS and TCS communities. By the time I started graduate studies, computer science had already become mostly a collection of nearly disjoint research organizations, lacking for example a major general CS conference. Since then I have seen theoretical computer science go down the same path. No longer does it seem we keep close track of what is happening, or even who the major researchers are, outside our own specialized areas.

Remember the roles we must play in the broader academic communities, the ones that go beyond the research that we follow and pursue. For as the famous Ben Franklin quote goes "We must all hang together, or assuredly we shall all hang separately."

Monday, March 23, 2009

The Putnam Exam: Some Thoughts

The Putnam Exam is a Math Competition which began in 1938. The current form is to have 6 problems in a 3-hour morning session and 6 problems in a 3-hour afternoon session. (Some of the older exams have 7 problems or have you pick 6 out of 7). Some observations.
  1. I went through all of the exams and classified which problems were in physics and which were in combinatorics (to confirm some suspicions that I had---see the next two points). The results are in this document. I am sure that if you were to do a similar classification you might disagree with me on some problems, but our tallies would not differ by much. (You can find the problems from 1985 until now here. Most years the problems and their solutions appear in The American Mathematical Monthly.)
  2. The older exams asked more about physics. To be more precise, the exams have had 18 problems on physics: 14 of them before 1960, and only 4 since then (1973, 1974, 1975, 1981). Why this trend? I suspect that before 1960 it was just assumed that a math major knew basic classical mechanics---now it's not as universal an assumption. Any other ideas?
  3. Combinatorics has been a relatively popular topic for recent problems. There have been 32 problems in combinatorics, 21 of them since 1978. In my lifetime I have seen combinatorics become more a part of the undergraduate curriculum, so this may be the reason.
  4. In 1953 one of the problems was to show (in today's terms) that if you 2-color the edges of K6 then there will be a monochromatic triangle. This is now so well known that I doubt they would ask it.
  5. What is better for a young math major to do: study for competitions like the Putnam exam, or do a research project? Both are certainly good. I've asked around and got the following responses.
    1. Several students have told me that taking the exam got them interested in some parts of math they had not heard of, so then they wanted to (and did) do research.
    2. Some other students told me that the exam is good since it's a finite, well-defined goal that they can focus on. By contrast, for a younger (perhaps immature) student, doing research is uncomfortably vague.
    3. Another student likened math competition people to people who know lots of words for SCRABBLE but don't know what they mean. I disagree. To understand answers to old exams and to generate answers to new exams you need to understand some real math. (I think that in World Champion Scrabble they should only give you 1/2 of the points if you don't know the meaning of the word.)
    4. There are many Putnam winners who went on to become research mathematicians.
    5. There are many Putnam winners who did not go on to become research mathematicians. I've heard it said of some of them that they could do a problem given to them but could not come up with problems on their own. I am skeptical of this as an explanation since there are all kinds of people who are good and bad at all kinds of things. Certainly being a good problem solver does not decrease your problem-creation ability.
  6. Personal note: I took the Putnam exam three times. My best score was a 33 (basically 3 problems right out of 12) in 1980. For one of the problems I felt a bit odd getting it right: I had had a year-long course in combinatorics (rare in those days), so I knew how to do it from knowledge, not cleverness. How good is a 33? Respectable, but not worth putting in my essay to grad school. It was 125th in the country (I think out of around 2000) and my school (SUNY Stony Brook) gave me a copy of Gödel, Escher, Bach since it was the highest score at the school. Joel Spencer proctored the exam and finished it, I suspect with a perfect score, in half the time.
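The fact in item 4, that every 2-coloring of the edges of K6 contains a monochromatic triangle (i.e., R(3,3) ≤ 6), is small enough to verify exhaustively: there are only 2^15 = 32768 colorings of the 15 edges, each checked against the 20 triangles. A minimal brute-force sketch in Python (the function name is my own):

```python
from itertools import combinations, product

def k6_always_has_mono_triangle():
    """Check that every 2-coloring of the edges of K6
    contains a monochromatic triangle."""
    vertices = range(6)
    edges = list(combinations(vertices, 2))      # 15 edges, as sorted pairs
    triangles = list(combinations(vertices, 3))  # 20 triangles
    for coloring in product([0, 1], repeat=len(edges)):
        color = dict(zip(edges, coloring))
        if not any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in triangles):
            return False   # found a coloring with no monochromatic triangle
    return True            # all 32768 colorings checked

print(k6_always_has_mono_triangle())  # prints True
```

The same loop on K5 would return False, which is why 6 is the right threshold and why the problem, once well known, stopped being a good competition question.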

Thursday, March 19, 2009

Which Awards are well known? Why?

The ACM announced some awards, which you can get to here.

Which is more prestigious?
  1. The Fields Medal (established 1936, around $15,000). (Also see Wikipedia Entry.) (Note that the IMU also gives out the Rolf Nevanlinna Prize (as of 1982) and the Carl Friedrich Gauss Prize for Applications of Mathematics (as of 2006). The webpage indicates they are on par with the Fields Medal.)
  2. The Turing Award (established 1966, around $250,000). (Also see Wikipedia Entry.)
  3. The Millennium Prize problems (established 2000, $1,000,000 per problem). (Also see Wikipedia Entry.)
  4. The King Faisal International prize (established 1976, around $200,000). (Also see Wikipedia Entry.)
You may be wondering What is the King Faisal International Prize? It is a prize sponsored by the King Faisal Foundation, which Wikipedia says is one of the largest philanthropic foundations in the world. They give awards in five areas: (1) Service to Islam, (2) Islamic Studies, (3) Arabic Language and Literature, (4) Medicine, (5) Science. The Science award has gone to people in math several times.

My impression is that this award is not as well known as the Fields Medal or Turing Award. (Commenters- agree? disagree?) Why is this? More generally, what makes an award prestigious?
  1. Money: But the Fields Medal, at $15,000, is still more prestigious than the King Faisal International Prize.
  2. Time: The Fields Medal and the Turing Award were established longer ago. At the time there weren't that many other prizes out there to compete with.
  3. Focus: The King Faisal International Prize covers both Islamic studies of various sorts and science of various sorts. This may make it harder to report on and talk about. The Fields Medal and the Turing Award are focused; the Millennium Prizes even more so.
  4. Quality of the Winners: A quick glance at the people who have won the King Faisal International prize for Science who did mathematics shows that they made excellent choices. So this is not the issue.
Is there a money/time/focus tradeoff? If I prove it, then can I win a Fields Medal? (NO---I'm over 40. Oh well.) Consider that the Millennium Prize problems have only been around since the year 2000, BUT offer far more money and are far more focused. So they confer prestige.

Which would you rather win: a Fields Medal or a King Faisal International Prize for Science? What's more important, money or prestige? Or perhaps if you win the Faisal award, your winning will make it better known. A better chance of hitting the big money is to go on DEAL OR NO DEAL.

Wednesday, March 18, 2009

The Best or the Worst

Do you judge a conference by its best or its worst? I don't mean the absolute best paper, any conference can get lucky. I don't mean the absolute worst, every PC makes mistakes. But do you judge a conference more by the top quarter of its submissions or its bottom quarter?

Ideally you should care about the best papers. You only have time and energy to go to a fraction of the talks anyway so you can just skip the weaker papers. You'll learn the most from the best papers in the conference.

But how do you distinguish the best from the worst? The program committees purposely don't release any ranking information on the accepted papers beyond a few award winners. So if the worst papers are pretty good, this gives a lower bound on the quality of any talk you attend.

But the real answer lies in the fact that we don't care about conferences because of the papers that we want to see but for what they do for our papers if they appear there. If the worst papers are very good and your paper gets accepted, this implies some quality level for your paper. And in the end we want conferences that make our papers look good. And so we focus more on the worst than the best. Yet another paradox of the CS conference system.

Tuesday, March 17, 2009

The Madhu Move

After weeks of rumors, I now have confirmation from the Microsoft website and the man himself. MIT theory prof Madhu Sudan has accepted a permanent research position at Microsoft New England. In standard practice, Madhu is keeping his options open by officially taking a one-year leave from MIT in case the move doesn't work out.

Madhu is one of the best researchers of his generation, having done fundamental work in PCPs and coding theory for which he received the 2002 Nevanlinna Award. He has had a number of extraordinary Ph.D. students and, from what I hear, is an excellent teacher. Even though Madhu is just moving a few blocks physically, this marks a great gain for Microsoft and a loss for MIT and academia.

I won't comment on why Madhu is leaving MIT but I have a couple of other thoughts:
  • With all of my non-academic friends worried about their jobs in our current financial situation, I feel quite safe with my tenured position. To give up tenure in this economy seems risky, though someone like Madhu could always find a job.
  • Nice to see Microsoft willing to make a commitment with a new hire of a pure theorist. 
  • MIT, where I got my Ph.D., has seen the loss of a few great theorists over the past couple of years: Dan Spielman, Ronitt Rubinfeld and Santosh Vempala as well as Madhu. Nevertheless MIT has hired some strong junior people such as Scott Aaronson and Jon Kelner and remains an amazingly strong place in theory, though the field is leveling.


Monday, March 16, 2009

Math on the Simpsons Last Night

Last night on The Simpsons they had a math problem. Homer solved it! Since Jeff Westbrook (PhD from Tarjan) is one of the writers, and I've heard they have other writers with a math bent, I'm not too surprised that they had a math problem. I am surprised that Homer solved it. It's a problem you've probably heard in other forms:
Homer has with him Baby Maggie, his dog, and a jar of poison that looks like delicious candy. (Homer: Oh, why did I take my baby and my dog with me when I went to buy poison! And why does the poison have to look so delicious!?) He needs to get all three across the river. He can only take one of them at a time in his rowboat. He can't leave Maggie with the poison. He can't leave the dog with the poison. (He CAN leave Maggie and the dog.) How does he do this?
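The puzzle is a tiny state-space search, the same shape as the classic wolf-goat-cabbage puzzle, and a breadth-first search over who is on which bank finds a shortest sequence of crossings. A sketch in Python (the state encoding and names are my own):

```python
from collections import deque

ITEMS = frozenset({"Maggie", "dog", "poison"})
# Pairs that cannot be left together without Homer:
UNSAFE = [{"Maggie", "poison"}, {"dog", "poison"}]

def safe(bank):
    """A bank without Homer is safe iff it contains no unsafe pair."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    # State: (Homer's bank: 0 = start, 1 = far; items still on the start bank).
    start, goal = (0, ITEMS), (1, frozenset())
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (side, left), path = queue.popleft()
        if (side, left) == goal:
            return path
        here = left if side == 0 else ITEMS - left
        for cargo in [None] + sorted(here):   # cross alone or with one item
            moved = frozenset() if cargo is None else frozenset({cargo})
            new_left = left - moved if side == 0 else left | moved
            new_side = 1 - side
            # The bank Homer just left must stay safe.
            behind = new_left if new_side == 1 else ITEMS - new_left
            state = (new_side, new_left)
            if safe(behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "(alone)"]))
    return None

# 7 crossings: poison over, return, Maggie over, poison back,
# dog over, return, poison over (taking the dog second also works).
print(solve())
```

Here the poison plays the role of the goat: it conflicts with both other passengers, so it must cross first and must be ferried back in the middle of the plan.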


The Simpsons has had lots of math on it. There is a webpage of all the math on The Simpsons here, though it doesn't have last night's episode yet (quite reasonable---if you expect that then you expect too much from the digital age).

How does The Simpsons compare to Numb3rs in terms of the accuracy of the math presented? I suspect The Simpsons is more accurate; but, to be fair, they don't have to find some goofy math way to solve a crime every week. The math on Numb3rs, goofy as it sometimes is, does connect to some math of interest; see this web page.

Friday, March 13, 2009

Skimming Journals

Many journals have an email service where they will send you a table of contents for each issue so you can keep up with what is in the journal; see for example Theory of Computing. I find these services invaluable, as otherwise I wouldn't remember to check each journal, and I like seeing what papers each journal has. Many of the societies and professional publishers require that you have a digital subscription to get the emails, supposedly an "added extra". But these journals lose the chance to advertise their papers to non-subscribers by making these email services unavailable to them.

In the last couple of weeks I have somehow been involuntarily subscribed to the email contents of a few lesser-known journals. I'm not sure whether someone hopes I mention these journals on the blog, or is subscribing many theorists, or is just playing a big practical joke (don't get any ideas). But please don't do it. Feel free to send me a single email asking me if I want to subscribe. But if I get subscribed where I have to opt out instead of opting in, I will opt out every time.

Thursday, March 12, 2009

Math Solves Problems

Caught the following commercial the other day.

I spent the entire commercial trying to guess what company was paying for this. Should have recognized the IBM blue.

Consider the first two lines:

Math is the only language all human beings share.
Math can better predict financial markets…
One should never use "only" when bragging. One can always find counterexamples.

Perhaps these days IBM should have left out the part about "financial markets". Though the New York Times Tuesday gave the credit (actually mostly blame) to the physicists.

Wednesday, March 11, 2009

Barbara Liskov Wins Turing Award!

The news that
Barbara Liskov has won the 2008 Turing Award!
you have probably already seen in email or on ***SORELLE***'s BLOG. It's already in Wikipedia's list of Turing Award winners. So why am I bothering to blog about it? Because I've already gotten 5 emails asking why I haven't blogged on it. Besides, there may be someone who didn't know it, so this blog is doing that person a service. Below is an edited version of what the ACM sent me (and probably you) about this.

ACM has named Barbara Liskov the recipient of the 2008 ACM A.M. Turing Award for her contributions to practical and theoretical foundations of programming language and system design, especially related to data abstraction, fault tolerance, and distributed computing.

Liskov revolutionized the programming field with groundbreaking research that underpins virtually every modern computer application for both consumers and businesses. Her achievements in programming language design have made software more reliable and easier to maintain. They are now the basis of every important programming language since 1975, including Ada, C++, Java, and C#.

Liskov heads the Programming Methodology Group in the Computer Science and Artificial Intelligence Laboratory at MIT, where she has conducted research and has been a professor since 1972.

The ACM A.M. Turing Award is ACM's most prestigious technical award. It recognizes contributions of lasting and major technical importance, and honors individuals whose work has advanced the field of computing. First presented in 1966, and named for British mathematician Alan M. Turing, the Turing Award is widely considered to be the Nobel Prize in Computing. It carries a $250,000 prize, with financial support provided by Intel Corporation and Google Inc.

For more on THIS YEAR'S Award winner see the ACM Press Release.

For more on THE TURING AWARD see the Wikipedia Entry which also includes a list of all of the winners.

Tuesday, March 10, 2009

Watching the Watchmen

Watchmen was the most popular movie in the US last week, but you wouldn't know it from the nearly empty suburban Chicago theater I went to Friday night to see it. The graphic novel was one of the highlights of my comic-book reading days of the 80's. I had forgotten most of the details in the more than 20 years since I read the graphic novel, and I purposely didn't go back so the movie would seem fresh.

Zack Snyder did a good job of fitting much (but not all) of the graphic novel into a 160-minute movie that never dragged. I enjoyed the movie but it didn't seem fresh since Snyder set it in the 1980's, an alternate 80's with Nixon still president, but the 80's nevertheless. At one point Ozymandias sits in front of a bank of TVs showing shows and commercials (Where's the Beef?) from my distant past.

But how would it play to those younger than me who can't remember the cold war, Vietnam, Nixon and the real threat of all-out nuclear war between the US and Russia? After all, the cold war ended in a much better way in reality than it did in the movie. It was a mistake not to set the movie in the present day (perhaps with a 98-year-old Ronald Reagan as president) and use our fears of terrorism as the catalyst for the events in the movie. The one reference to current times came from the twin towers of the World Trade Center prominently found in the movie's New York skyline shots.

Watchmen the graphic novel worked because it played on the issues of its day. Watchmen the movie tells the story of a time distant in my memory, and it just doesn't work as well.

Monday, March 09, 2009

Let's Congratulate Gerard Huet! (who?)

Note the following:
The EATCS Award is awarded in recognition of a distinguished career in theoretical computer science. The Committee, consisting of Catuscia Palamidessi (Chair), Paul Spirakis and Emo Welzl in charge of evaluating the nominations to the 2009 EATCS Award has come to the decision to propose Gerard Huet as the candidate for the 2009 EATCS Award in view of the excellent research contributions to theoretical computer science produced throughout his outstanding scientific career. The Committee unanimously shares the motivations contained in the nomination letter. The proposal has been unanimously approved by the EATCS Council. The Award will be assigned during a ceremony that will take place in Rhodes (Greece) during ICALP (July 5-12, 2009).
I have never heard of this guy, which says a lot more about my view of theory (narrow) than about his accomplishments (great---see later in this post). It may also show that the term theoretical computer science covers a rather large area. Also, I think his area of research is more popular in Europe than in America. However, I'm not trying to make a statement about what theory should or should not be; I am trying to get us educated---below is more about him (not from me, from the EATCS site). You can also go here.

Gerard Huet has made numerous, enduring advances in the foundations of computer science and has been a central figure in several important software systems. He has also had a remarkably active and successful academic and professional life. His distinguished career in theoretical computer science has exerted considerable influence on not only the field but also the many students and colleagues that he has directly influenced.

A hallmark of Huet's research is his talent for taking highly technical material and providing it with a clear and deep analysis. For example, in his paper on the unification of typed lambda-terms (in the first volume of TCS, 1975), Huet took a difficult topic, reshaped and redefined it, and left a solution so well developed that it took more than 15 years of active research before anyone saw a need to extend it. Since that first major result, he has repeated this performance numerous times.

Equally characteristic of Huet's research is the intimate connection he maintains between theoretical computer science topics and their effective implementation. He was one of the first computer scientists who was able to move between these two domains and who felt that there was no alternative to doing so: the two topics absolutely needed to inform each other.

Huet was a leader in the general areas of logical frameworks and constructive typed theories. Thanks in large part to his achievements and efforts, formal mathematics has taken huge strides and is starting to have an impact on the wider world. He has worked extensively at disseminating his view of this bold new world of formalized reasoning: for example, he has organized numerous summer schools and given countless invited talks and tutorials. Many researchers in France and elsewhere count themselves as students of Huet even if they were never formally his PhD advisee.

We list and briefly describe some of the many contributions made by Gerard Huet.
  1. In the early 70's, Huet developed both resolution and unification for higher-order logic: these results have become the core of several modern systems that perform deduction in higher-order logic.
  2. Huet has done fundamental research in the areas of rewriting and Knuth-Bendix completion. His writings in this area are extensive and elegant.
  3. Between 1982 and 1989, Huet directed and contributed to the design and implementation of the CAML functional programming language. That language and its descendants have given academics and industries an efficient and well structured programming language.
  4. In the 1990s, Huet and his students designed and built the first version of the Coq proof assistant. Today, Coq is one of the most used and trusted platforms for formalized mathematics and formal methods.
  5. Huet has a broad culture inside and outside of computer science. He has made, for example, important contributions in other areas as well: he is the author of executable lecture notes; he is the designer of the Zipper data structure; and he has built the Zen toolbox for phonological and morphological segmentation and labeling of Sanskrit.

Friday, March 06, 2009

Addresses

I have long since lost count of all the phone numbers in my life. Every time I have moved we have gotten a new number. Even when I haven't moved, we have had our area code changed on us more than once. When I first arrived in Chicago in 1989 there was one area code (312) for the entire Chicagoland area. Now there are seven area codes (312, 773, 224, 847, 630, 331 and 708) and I've had phone numbers in five of them.

There used to be talk of running out of phone numbers in the US between multiple-lines at home and work, fax machines, cell phones, etc. But not so much anymore. Why? Because of my daughter.

My daughter has a cell phone number. It could likely be the only phone number she ever has. She may never get a home phone. Fax machines, already becoming obsolete, will likely go the way of the typewriter by the time she grows up. I wouldn't be surprised if she never gets a separate office phone line either.

My daughter has an email address. It could also be her primary email for the rest of her life.

My daughter has a postal address. She has had many postal addresses and will have several more. Why not just have the USPS computers map some number (like her cell phone number) to her current physical address, which she could update in a database? In a world where identifiers are moving to point to people instead of locations, why should the old postal system be left behind?

Thursday, March 05, 2009

Seeking an EASIER proof of the s x s^2 theorem

Consider the following theorem:
For any finite coloring of NxN there exists a square such that all corners are the same color.
There are several proofs of this:
  1. This can be proven from the Hales-Jewett theorem. (Advantage: you automatically get the full Gallai-Witt theorem with this approach.)
  2. This can be proven from the Gallai-Witt theorem trivially.
  3. This can be proven by using the van der Waerden (VDW) theorem on the diagonal.
  4. This can be proven by using the VDW theorem on the side.
  5. This can be proven directly. It has a VDW flavor to it but can be shown to HS students who have not seen the VDW theorem (I've done it).
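The statement itself is easy to experiment with by brute force on any given finite coloring. Below is a minimal Python sketch (the function name is my own invention, not from any standard library) that searches an n x n grid for an axis-aligned square whose four corners share a color:

```python
from itertools import product

def find_mono_square(coloring, n):
    """Search an n x n grid for an axis-aligned square whose four
    corners all get the same color.  coloring maps (x, y) -> color.
    Returns the list of corner coordinates, or None if none exists."""
    for x, y in product(range(n), repeat=2):
        for d in range(1, n - max(x, y)):
            corners = [(x, y), (x + d, y), (x, y + d), (x + d, y + d)]
            if len({coloring[c] for c in corners}) == 1:
                return corners
    return None
```

On small grids this lets students play with the theorem directly: color a grid by hand and see which colorings avoid a monochromatic square the longest.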
The following theorem can be proven from the polynomial Hales-Jewett Theorem:
For any finite coloring of NxN there exists an s ≥ 2 and an s by s^2 rectangle such that all corners are the same color.
I have tried to prove this from scratch, from the Poly VDW theorem, from Gallai-Witt, and from ordinary Hales-Jewett. I have not managed it.
  1. Is there a proof either from scratch, from Poly VDW, or from ordinary HJ? If so then please comment.
  2. Normally one asks for an elementary or (I prefer the phrase) purely combinatorial proof. Even so, I can't rephrase my question with this term since there is a purely combinatorial proof of Poly Hales-Jewett. (See either Walters's paper or the rough draft of the book on VDW stuff I am co-authoring with Andy Parrish, pages 77-89.)
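The s by s^2 question is also easy to experiment with by brute force. Here is a minimal Python sketch (again, the function name is my own) that searches a given finite coloring of an n x n grid for an s ≥ 2 and an axis-aligned s by s^2 rectangle with monochromatic corners:

```python
def find_s_by_s2_rectangle(coloring, n):
    """Search an n x n grid for an s >= 2 and an axis-aligned
    s-by-s^2 rectangle whose four corners share one color.
    coloring maps (x, y) -> color.  Returns (s, corners) or None."""
    for s in range(2, n):
        if s * s >= n:
            break  # the rectangle no longer fits in the grid
        for x in range(n - s):
            for y in range(n - s * s):
                corners = [(x, y), (x + s, y),
                           (x, y + s * s), (x + s, y + s * s)]
                if len({coloring[c] for c in corners}) == 1:
                    return s, corners
    return None
```

Of course a search over one coloring proves nothing about all colorings, but it is a cheap way to build intuition about how hard monochromatic corners are to avoid.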

Wednesday, March 04, 2009

How much does it cost to produce an online Journal? Do I hear $0.00?

***SORELLE***'s post pointed me to a post on Dense Outliers that suggests a free online Comp Geom Journal. The post wanted people to vote on whether this is a good idea.
  1. It wasn't quite a vote --- it was YES if you thought there should be a free online CG journal AND you were strongly committed to it (I suppose willing to be an editor or author), NO if you thought it was a bad idea, or NO BUT SOMEONE ELSE SHOULD (a good idea but you are not committed to it). I am so far removed from the community that I couldn't even vote NO BUT. (I will browse through ***SORELLE***'s Comp Geom Proceedings. The probability that I find an article I care about is 1/3 --- every third year there is some article on Geometric Ramsey Theory.)
  2. Reading the post and talking to ***SORELLE*** my impression is that the poster and ***SORELLE*** think running an online journal costs $0.00. The reasoning goes that authors, referees, editors, all work for free. If the authors do their own typesetting and there is no print version then the cost should be $0.00. Is this true?
  3. A while back Elsevier had the editorial board of Information and Computation (which I was on) meet in Boston to discuss the journal. From what they said I honestly believe that running a journal does cost money. How much is a fair question, but it does cost something. Running the computers where it's all stored and maintaining everything does cost money. But I am not an expert on these things and I would like a commenter who is to weigh in.
  4. One obvious cost --- Elsevier paid for our flights, hotel, and dinner at a really good restaurant.
  5. There are three free online journals in math/TCS that I know of: the Electronic Journal of Combinatorics, Theory of Computing, and the Chicago Journal of Theoretical Computer Science. If someone knows how they are doing, whether they cost money to run, and if so where the money comes from, please comment. Also, are there others? How are they doing? Do they need money? Where does it come from?
  6. One advantage of an online journal, which the Electronic Journal of Combinatorics has, is dynamic survey articles. That is, surveys that are updated over time. (My favorite: Ramsey Theory Applications. And no, I didn't write it.)

Tuesday, March 03, 2009

Algorithms for All

Last week I visited Edinburgh. Nice new CS building in an ancient university.

On Tuesday before a lunch talk, someone mentioned an article published the day before in the Guardian, a British paper, about the glory of algorithms, with some concern that the article had misled people into thinking algorithms came from mathematics (the article ends with "Mathematicians rule!"). To me any publicity is good publicity, but the good faculty at Edinburgh felt CS needed defending. To set the matter straight they decided to write a letter to the Guardian. Things moved very quickly, especially for a British university, and the letter was written, signed by many faculty members and sent by the end of the day.

I mentioned the Guardian letter and the controversy briefly in my Wednesday post.

What happened next I learned from Suresh. The Edinburgh letter was published Friday along with another letter from the Operations Research Society claiming algorithms as their own.

So who owns algorithms? We all do. Roughly CS analyzes algorithms and OR applies algorithms to real-world problems, all often based on important mathematical ideas and theorems. Algorithmic ideas now permeate many other areas of study such as physics, biology and economics. Algorithms rule!

Monday, March 02, 2009

You Can't Separate the Art from the Artist

Sorelle blogged about double-blind reviewing, removing authors' names from papers during the review process. Michael and Suresh picked up the topic, so I will also add my two cents, supporting Sorelle's point 6 that the authors should be taken into account (a point Sorelle herself doesn't agree with). My views are only worth two cents --- the best papers will nearly always be accepted and the worst rejected, so we are only tweaking the edges here.

Certainly we have biases towards authors, and in the committees I have served on we are quite conscious of the authors and often bring them up when discussing the merits and demerits of each paper. And we should. For one, an author is a measure of trust. Take a lengthy, detailed proof about PCPs, for instance. You should trust the proof more if it comes from someone like Johan Hastad or Irit Dinur, who have histories of dealing with long technical proofs, than from someone who is not as detail-focused (like me). But it is much more than that.

A good paper is more than just a theorem and proof. We don't read a paper just to see if a certain theorem is true and if the proof is logically valid. That's just a mechanical check. No, we enjoy the paper because the result and proof have some beauty to them, a clever new idea or a different way of looking at things. A good paper is a work of art.

Art is not judged in isolation. You don't read a book or movie review that doesn't connect to previous work of the author or director. The price of a painting is greatly affected by the artist who painted it. You choose which symphony or opera to attend more for the composer than the particular piece. 

And so it should also be with mathematical papers. I'm not saying we should accept papers based solely on the authors. No, you must judge the particular work, but you must judge it in its full context, and that includes how the paper connects to the authors' earlier work, how it fits in with related work and what kind of theme it follows.

Double-blind reviewing removes critical information, taking away the soul of the paper, the creative forces behind it. History will judge a person's research career through his or her publications, so we shouldn't ask the PC to make decisions ignoring the context that comes from knowing the authors.