Wednesday, December 31, 2008

2008 Complexity Year in Review

Paper of the year goes to Optimal algorithms and inapproximability results for every CSP? by U. Washington grad student Prasad Raghavendra. The paper gives a semidefinite program that approximates any constraint satisfaction problem, achieving the best possible approximation ratio if the unique games conjecture holds.

Some other notables: Aaronson-Wigderson's Algebrization, Raz's counterexample to the strong parallel repetition theorem and many others. Another good year for Complexity.

Despite the dozens of "proofs" that have come my way, the P vs. NP problem remains open. Maybe next year.

NSF funding for theoretical computer science is at its highest level since I started this blog. Princeton hosts a new NSF-sponsored Center for Computational Intractability and Microsoft opened a new research lab in Cambridge, Mass. The theoretical computer science landscape looks quite strong.

However the current economic crisis will pose great challenges to computer science and all of academia as university budgets tighten. Great research can happen in awful economies—computability theory got its start during the Great Depression, for example. But we all suffer if we lose a generation of theorists to this economy.

We remember David Gale, Nick Reingold, Ingo Wegener and others who inspired our lives, as well as George Carlin, Arthur C. Clarke, Michael Crichton, Bobby Fischer and Randy Pausch.

We thank guest posters Richard Beigel, Manuel Bodirsky, Rance Cleavland, Iftah Gamzu, Evan Golub, Nicole Immorlica, Kamal Jain, Joe Kruskal, James Lee, Richard Matthew McCutchen, Amir Michail, Ketan Mulmuley, Vahan Mkrtchyan, Kirk Pruhs, Ken Regan, Rahul Santhanam, Uzi Vishkin, Philipp Woelfel and Jens Zumbraegel for their contributions to our blog this year.

A personal thanks to Bill Gasarch, not only for allowing me to come back to the blog and staying on as co-blogger, but also for organizing a great Computational Complexity Conference at Maryland.

I will always have many great memories of 2008. I started a new job at Northwestern in January and it was a great first year. My eldest daughter became a teenager and had her Bat Mitzvah in May. And after an entertaining election season, our junior senator is just weeks away from becoming the first African-American president of these United States of America. An exciting year indeed.

Wednesday, December 24, 2008

The Rooster

My daughters came up with the following joke.
Two roosters sat on a barn. The first rooster asked the second what time he was supposed to crow. The second rooster replied "in the morning".

At 12:01 AM, the first rooster screamed "Cockle-Doodle-Do" waking up the farmer and his family. The second rooster said "Why are you doing that now?"

The first rooster said, "I looked up 'morning' in Wikipedia and it said morning starts at midnight."

The moral of the story: Don't trust Wikipedia.

After coming up with the joke, my daughters wanted to change the Wikipedia entry for morning to make the joke funnier but I talked them out of it. I always have a hard time convincing them that Wikipedia is usually quite trustworthy when their teachers at school keep pounding on how unreliable Wikipedia can be.

Enjoy the holidays. We'll be back next week with the Complexity Year in Review.

Tuesday, December 23, 2008

Springer Open Choice

First some administrative stuff. Posting will be light until New Year's. The comments feed is not working properly—Blogger is working on it. I deleted a spam comment from my spam post before I realized the comment was a joke. Sorry.

We just finished up an article for the Springer journal Theory of Computing Systems. If you go to the article's web page you'll see the paper is open access; you can download it from anywhere. How did that happen?

In the final stages of production, Springer offered us the opportunity to have our paper in their new Open Choice plan. For a fee we could have our paper downloadable with a license similar to that used by Creative Commons for non-commercial use.

That fee would be $3000, far more than I'm willing to pay to let you download the paper from Springer when you can download a version from my home page for free.

But Springer has deals with some institutions, mostly Dutch, to offer Open Choice for all their papers. And one of my co-authors is Dutch. So enjoy!

Monday, December 22, 2008

No Good Deed Goes Unpunished

From the Ethicist in yesterday's New York Times Magazine.
Another college professor and I wanted a student to work on a project of ours, but she is a Mac user and the project required a program written for the PC. My colleague's husband, a Ph.D. in computer science, suggested an alternative program the student could download. She did; her computer froze and would not restart. The husband refused to assist her further. As the spouse of a colleague and the person whose advice caused the problem, isn't he obligated to help?
Randy Cohen, the Ethicist, gave a wishy-washy response that the husband doesn't have to help but he should.

Ignoring the whole Mac/PC issue, why is "a Ph.D. in computer science" relevant to the story? We don't get the fields of the questioner or the colleague.

I certainly would stop giving advice if every time I recommended a program there was an implicit promise of additional support. Though my Ph.D. is in Applied Math so the situation above doesn't apply to me.

Reminds me of some good advice I got as an undergrad when I was working as a programmer for Cornell computer services. Some person called me by mistake asking for some technical support and I helped him out. But then this person kept calling. My officemate suggested I give him some wrong but harmless advice. The caller thought I was an idiot and never called again. Problem solved.

Friday, December 19, 2008

Predictability in Movies: was there ever a case where...

This post was inspired by HALF of the movie FLIGHTPLAN.

I have watched HALF of the movie FLIGHTPLAN. In the movie Kyle Pratt (played by Jodie Foster- I didn't know that Kyle could be a girl's name) takes her six-year-old kid on an airplane, the kid disappears, and the flight manifest says she was never there. So the captain and others think that Kyle is nuts.

Having not seen the rest of the movie yet I wonder which is true: (1) Kyle IS nuts, or (2) there is some weird conspiracy going on. Gee, I wonder which one it will be!

In the real world I would of course assume (1). But since it's a movie I absolutely know that (2) will be the case. The only point of suspense for me is: will the final explanation make sense?

SO, here is my question: Do you recall ever seeing a movie or TV show where it is clear that EITHER (1) the main character is nuts, or (2) there is a massive conspiracy, and it ends up being (1)? (It doesn't count if it ends up being a dream.)

YES, I know that if the main character was just nuts the story might not be as interesting. However, they should have the main character be nuts once in a while so that it's not so predictable. That way when watching these things we would have a genuine question and genuine suspense. As for making it interesting- that's their job!

Thursday, December 18, 2008

Have We Solved Spam?

The amount of spam in my spam folder has dropped below fifty per day, down from several hundred a few months ago. The amount of spam that gets through Gmail's filters to my inbox is well under one a day. I'm not sure why. Better spam filters, better enforcement, fewer economic incentives for spam, maybe even the bad economy is playing a role. While not completely solved, email spam is no longer a significant problem for me.

This has happened without you having to pay me, solve a CAPTCHA or have your computer solve some hard problem to send me email. I have a mailto link on my home page (albeit generated by javascript code) and my various email addresses show up in plaintext on hundreds of pages scattered over the Internet.

For those of you who insist on making it painful to send you email (anything beyond a simple click or cut-and-paste without further editing), it's time to change your ways and not let your fear of spam make extra work for those of us who simply want to say hello, and maybe ask you to review a paper.

Wednesday, December 17, 2008

TEACH EVERYONE PROGRAMMING! (Guest Post)

(Guest Post by Amir Michail)

The case for teaching everyone programming

If everyone could program, then anyone could more easily and cheaply scale his/her world view to reach millions in one-on-one conversations to, for example, have greater impact on the results of an election.

If everyone could program, then anyone could more cheaply achieve immortality by encoding what they would like others to remember about them into a simulation of themselves.

Is it ethical to deny people such capabilities? Moreover, is it ethical to deny people the chance to dream up and create entirely new sorts of programs to enhance their lives?

But I hear you say that programming is a difficult skill that only a few could learn. Not really. Learning programming is easier than learning reading and writing, basic arithmetic, or playing a musical instrument. Programming at a professional level is hard, but so is writing a novel, proving theorems, or playing in an orchestra.

Nonetheless, we need to start thinking about ways to make programming more accessible to the vast majority of the population. This might mean taking the math out of programming and using languages that are not Turing-complete. Chatbot programming is one such example.
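To make that last suggestion concrete, here is a small sketch of what chatbot programming in a deliberately non-Turing-complete language might look like: the end user writes only a fixed table of pattern/response rules, with no loops or unbounded state to reason about. (The rule format and names below are one hypothetical possibility, not part of Amir's proposal.)

  # The user's "program" is just this rule table; the engine below is fixed.
  RULES = [
      ("hello",    "Hi there! Ask me about my research."),
      ("research", "These days I mostly think about elections."),
      ("bye",      "Goodbye!"),
  ]

  def reply(message, rules=RULES, default="I don't understand that yet."):
      # Return the response of the first rule whose pattern occurs in the message.
      text = message.lower()
      for pattern, response in rules:
          if pattern in text:
              return response
      return default

  print(reply("Hello, who are you?"))    # Hi there! Ask me about my research.
  print(reply("Tell me about quantum"))  # I don't understand that yet.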

Tuesday, December 16, 2008

Prediction Markets Review

Yesterday the Electoral College delegates voted, 365 for Barack Obama and 173 for John McCain. How did the markets do?

To compare, here is my map the night before the election and the final results. The leaning category had Obama at 364. The markets leaned the wrong way for Missouri and Indiana, their 11 electoral votes canceling each other out. The extra vote for Obama came from a quirk in Nebraska that the Intrade markets didn't cover: Nebraska splits its votes based on congressional districts, one of which went to Obama.

Indiana and Missouri were the most likely Republican and Democratic states to switch sides according to the markets, which means the markets did very well this year again. Had every state leaned the right way (again), one would wonder if the probabilities in each state had any meaning beyond being above or below 50%.

Many argue the markets just followed the predictions based on polls like Nate Silver's fivethirtyeight.com. True up to a point; Silver did amazingly well and the markets smartly trusted him. But the markets also did very well in 2004 without Silver. One can aggregate polls and other information using hours upon hours of analysis or one can just trust the markets to get essentially equally good results with little effort.

In some other prediction market news: A couple of years ago, Tradesports spun off their non-sports markets into a new site Intrade. Last month Tradesports shut down. Intrade still lives and hopefully it will continue to provide an exciting collection of political and other prediction markets.

Cantor Fitzgerald, owners of the Hollywood Stock Exchange, have filed with the CFTC to create real money markets based on box office receipts. Each share will pay off one-millionth of the first four weeks of the film's gross. They don't call it a prediction market since it doesn't have a binary outcome but I do because it is based on a specific time-limited event and the price should accurately reflect the expected gross of the movie.

Not market related but the big news in Illinois is of course the embarrassment of our governor right after the Obama high we've been on. At least Blagojevich will, hopefully, fade from view soon while Obama should leave a strong legacy as president that we in Illinois will remember for a long time.

Monday, December 15, 2008

A NIM-game and some possibly open problems

I made up the following problem which was on the Univ of MD Math Competition for HS Students Part II. (It was question 3 of 5, indicating intermediate difficulty.) It leads to other questions that may be open and/or interesting. I state it differently than it appeared so I can more easily state the other questions.
Let A ⊆ N - {0}. Let n ∈ N. NIM(A,n) is the following game: There are initially n stones on the table. Players I and II alternate removing stones from the table. They can remove x stones iff x ∈ A. The first player who can't move loses. Let SQ be the set of squares. Show that there exists an infinite number of n such that Player II wins NIM(SQ,n).
  1. Most people who read this blog should be able to do this problem. The solution is at the link above.
  2. Let p ∈ Z[x]. Let A_p = { p(x) : x ∈ Z, p(x) > 0 }. It is easy to show that there are an infinite number of n such that Player II wins NIM(A_p,n).
  3. Let POW2 be the set of powers of 2. Let A=N-POW2. It is easy to show that there are only a finite number of n such that player II wins NIM(A,n).
  4. OPEN: classify exactly which A are such that there are an infinite number of n such that Player II wins NIM(A,n). (Note- it may not have a nice classification.)
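For anyone who wants to experiment with specific move sets A, here is a small sketch (mine, not part of the competition problem) that tabulates the n for which Player II wins, i.e., the positions from which the player to move loses:

  def player_ii_wins(A, limit):
      # win[n] is True iff the player to move can force a win from n stones.
      win = [False] * (limit + 1)
      for n in range(1, limit + 1):
          win[n] = any(x <= n and not win[n - x] for x in A)
      # Player II wins exactly at the positions where the player to move loses.
      return [n for n in range(limit + 1) if not win[n]]

  squares = [i * i for i in range(1, 50)]            # SQ, large enough for the limit below
  print(player_ii_wins(squares, 100))                # Player II wins for NIM(SQ, n), n <= 100
  powers_of_two = {2 ** i for i in range(8)}
  non_powers = [x for x in range(1, 101) if x not in powers_of_two]
  print(player_ii_wins(non_powers, 100))             # item 3: the output is just [0, 1, 2]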

Friday, December 12, 2008

Request for Grant Reviewers from NSF

(Guest post from Richard Beigel, NSF Program Director for Algorithmic Foundations)

In order that proposals from CDI PIs (see below for what the CDI grants are) get a fair and knowledgeable review it is important that we have experts available for all areas covered by CISE to serve on these panels. To make this happen it is important to have the CISE community volunteer in large numbers to serve as reviewers for CDI. There is a webform available here for anyone who wants to volunteer to review CDI proposals.

Cyber-Enabled Discovery and Innovation (CDI) is NSF's bold five-year initiative to create revolutionary science and engineering research outcomes made possible by innovations and advances in computational thinking. Computational thinking is defined comprehensively to encompass computational concepts, methods, models, algorithms, and tools. Applied in challenging science and engineering research and education contexts, computational thinking promises a profound impact on the Nation's ability to generate and apply new knowledge. Collectively, CDI research outcomes are expected to produce paradigm shifts in our understanding of a wide range of science and engineering phenomena and socio-technical innovations that create new wealth and enhance the national quality of life.

Thursday, December 11, 2008

The Job Market

Bill's post Monday about a Michigan post-doc position garnered quite a few comments, generated by legitimate worries about the academic job market.

1992 was the single worst year so far in the CS faculty job market. Many of our students applied to 100-200 schools. Very few got good jobs and several others either left academia or took jobs at departments at levels well below their capabilities.

That bad job market discouraged many from getting Ph.D.s in computer science. But those who did graduated in the middle of the dot-com boom when CS jobs were relatively plentiful. The moral: Don't let today's job market determine whether you consider graduate school. By the time you finish your Ph.D. and a postdoc position, the recession should be well over and the first wave of theoretical computer scientists will start retiring.

That's little comfort for those searching for a faculty job this year. I expect this year's academic job market to be much worse than 1992, not just in theoretical computer science but in all academic disciplines. At Northwestern "hiring into new positions will be deferred as much as possible, while hiring into vacated positions will be carefully scrutinized to ensure that the prospective candidates present significant opportunities that might not be available in the future" and we are one of the healthiest financially. Harvard is freezing both hiring and faculty salaries. I'm hearing of many schools that plan to cut salaries.

The best suggestion is to ride it out. Take another postdoc or other temporary position. Consider jobs outside of North America. The disappearing slots will have to be filled once the economy rebounds and no one will hold it against you that you were unable to secure a US faculty job in 2009.

Wednesday, December 10, 2008

How was New York Theory Day?

REPORT AND REFLECT ON IBM/NYU/Columbia THEORY DAY

Most years in the FALL there is a THEORY DAY at NYU and in the SPRING there is a THEORY DAY at Columbia. They now call this IBM/NYU/Columbia Theory Day. The website did not say which numbered Theory Day this was, but the 2004 webpage said it was the 25th one. If they have had these every semester since then... well, you can do the math.
  1. I was GOING TO post about the content of the talks. But the abstracts of the talks do a better job, so I refer you to those. Should I have stayed home and just read the abstracts? NO. Seeing someone give a talk on something does provide extra insight.
  2. What does one get out of such things? It is good to know what is going on in theory, especially in parts of theory that you are NOT working on.
  3. Why is it good? I could say you might switch areas OR you might spot a connection to your area and make a contribution OR you might get a paper out of it. While all of these are true, just KNOWING stuff is important. One tangible aspect of that is, if you are on a Program Committee you will have a better idea of the papers not in your area. Also, it helps you follow talks at the next theory day, or some other venue where there are talks not in your area.
  4. I don't expect to follow that much of the talk. But I do expect (and this happened) to get introduced to an area and get some references that I may read later.
  5. Talks are an hour long. If it's a relatively new field (e.g., on-line ad slot scheduling) this is good. If it's a survey talk (Joe Mitchell's talk was) this is good. If it's on a classical field that you are not familiar with and they are proving the very latest results, then a full hour is a bit much. But this is just from my viewpoint of wanting to be introduced to a field.
  6. Most interesting thing I learned: Joe Mitchell, Comp Geom from SUNY Stony Brook, has had some of his work actually used by real people in the real world. (That may be a post later.) Since I am often skeptical of the practicality of theory, I was happy to hear this.
  7. Oddest thing I learned: NYU and Brooklyn Poly-Tech are merging in some fashion. Nobody quite knows the details or what this means yet. I'm hoping it means that theory day will one day be at Brooklyn Poly-Tech which will make my train ride to theory day 15 minutes shorter.
  8. New York Theory Day is usually announced about a month before it happens. This is not really enough time to plan. In fact, I have missed Columbia Theory day in the past because I had other plans by the time it was announced.

Tuesday, December 09, 2008

Ingo Wegener (1950-2008)

Philipp Woelfel remembers his advisor.

"Doktorvater" (doctor father) is the unofficial German term for PhD adviser – from my perspective, I can't think of a better word to describe my PhD adviser, Ingo Wegener, who passed away on November 26, 2008, after a long battle with cancer.

Probably many readers of this blog know about Ingo's work in complexity theory. His research is exemplified by the topics of two of his monographs: The Complexity of Boolean Functions, 1987 (known to most of us simply as the Blue Book), and Branching Programs and Binary Decision Diagrams – Theory and Applications, 2000. Probably less known in this community is that about 10 years ago, Ingo began to investigate and analyze randomized search heuristics (evolutionary algorithms). Ingo and his research group strongly influenced the theory of this area; previously, research in the area was primarily experimental.

Ingo was widely recognized as an outstanding scientist. He was a member of the "Wissenschaftsrat" (the most important scientific advisory committee of the German government), a member of the German Academy of Sciences Leopoldina, and the Academy of Sciences of Nordrhein-Westfalen. In 2006 he was awarded the Konrad-Zuse-Medal, the most prestigious German computer science award.

Here are some things I remember about Ingo (hopefully, some readers can add their recollections to the comments).

  • Ingo always wrote his scientific texts by hand. And Ingo was an incredibly efficient writer: when he was writing his recent textbook Complexity Theory: Exploring the Limits of Efficient Algorithms, the students who were LaTeXing the manuscript, as well as those who were proof-reading it, used to complain that Ingo was preparing the manuscripts too fast for them to keep up. (Ingo was on Sabbatical, though.)
  • Ingo was a gifted teacher. The students of the University of Dortmund had an evaluation system where each term they would elect the worst teacher, who would then be called "Lehrer Lempel" (sorry, probably only Germans understand the term). Ingo's lectures were known to be most difficult. Despite this, he usually came out "last" in the contest without any chance of ever winning the infamous "Lehrer Lempel" cup. In fact (according to Wikipedia) Ingo won the (real) teaching award of the University of Dortmund twice.
  • One of my favorite quotes is this: asked about the difficulty of his exams, he replied by asking whether anyone who couldn't distinguish a kidney from a liver should be allowed to graduate from medical school...
  • Ingo was one of the best organized people I have ever met. When I handed in my PhD thesis, Ingo said he'd be going to a workshop the next day, but he would write the report after returning. Sure enough, he came back from the workshop 5 days later, with the report on my thesis in his bag (and he wasn't a co-author of any of its papers).
We have lost an outstanding researcher, teacher, adviser, and friend, who for me was a true "Doktorvater".

Monday, December 08, 2008

A job posting for THEORY-Univ of Michigan

(Guest Post from Anna Gilbert from Univ of Michigan. It's actually a pointer to a job posting in theory!)

We have an opening for a postdoctoral fellow in Theoretical Computer Science at the University of Michigan. This is a joint position between the Department of Mathematics and the Department of Computer Science Engineering. Please apply by January 1, 2009 when we start reviewing applications.

Here is the URL for a more complete description: here

Friday, December 05, 2008

Science of Victory!

The Army has a new video The Science of Victory (via the CRA Blog).

An impressive argument for basic research, though a bit heavy on the scare tactics.

Thursday, December 04, 2008

Proofs Still Not Dead

Helger Lipmaa points to Justin Beldin's essay on the epistemology of Interactive/Zero-Knowledge proofs and asks my opinion on the general topic.

Reminds me of the 1993 Scientific American article "The Death of Proof" by John Horgan. Horgan talked about experimental proofs and computer-assisted proofs as well as interactive proofs. Most notably there was a sidebar story titled "A Splendid Anachronism?" on Wiles' then recent proof of Fermat's last theorem. It's nice to report that fifteen years later the traditional mathematical proof is not dead yet.

Back to Helger's question. I have no problems with the probabilistic aspect of interactive proofs. With a proper source of randomness, one can make the error so small it gets washed out by other cosmic effects. I have no problems with the interaction in an interactive proof either. I learn all proofs by interacting, either with a person or a paper.

But a proof is something to be savored and shared and interactive proofs prevent both. One can understand a traditional mathematical proof at many levels of detail, the same way one can read Shakespeare either as a quick read or looking at it in depth to find new meanings. Interactive proofs require proofs at the purely logical level, sort of like explaining the Mona Lisa by giving a bitmap of the image. And in the end the verifier only gets to learn that there is a Mona Lisa, possibly without any idea what it looks like.

Moreover when you hear a great proof you want to tell the world, much the way you want to share a good joke, something I've occasionally done on this blog (proofs and a bad joke now and then). But an interactive proof of a tautology or a zero-knowledge proof of satisfiability cannot be shared. You believe but you cannot convince others.

Nothing against the theory of interactive proofs and zero-knowledge. What we can do with them is quite surprising; the applications, particularly to approximation algorithms and cryptography, are quite amazing and I'm lucky to have played a role in their development. But the proofs that interactive proofs do what they do will always excite me much more than the proofs they generate.

Wednesday, December 03, 2008

Quantum Leap/Quantum of Solace/What does Quantum mean anyway?

When the term quantum is used in English it often means large as in the TV show Quantum Leap. The recent James Bond Movie Quantum of Solace seems to use the word quantum to mean small (also the name of a bad guy organization). What should it mean to better reflect what it is in Physics?
  1. Large: Quantum is discrete and hence moving in jumps is large compared to continuous things that move in very very small units (if units at all).
  2. Small: The jumps are really small!
  3. Neither: The point of Quantum is that things are discrete. This is neither large nor small.
Some other random thoughts on this:
  1. Was the term quantum originally a word in English that is now used for science, or was it originally a word in science that is now used in English?
  2. The word random in English often means something slightly different- arbitrary.
  3. The word continuous as it is used in math bothers me. There are some functions that look non-continuous but by the formal definition they are. The term uniformly continuous takes care of this, but they should just rename things so math-continuous matches common-sense-continuous.
  4. Valiant defined a parsimonious reduction to be a reduction where the number of solutions is preserved (e.g., if SAT ≤ 3-COL via such a reduction f then if φ has X satisfying assignments, then f(φ) has X 3-colorings). I had never heard of the term parsimonious before I saw those reductions. It means thrifty but with a positive angle (as opposed to cheap). I don't quite see why that's really what you want to call these reductions.

Tuesday, December 02, 2008

How Many Sides to Your Error?

I often get asked various questions about complexity but I have now had the same question asked twenty years apart.
Are there natural problems in BPP not known to be in RP∪co-RP?
Informally, BPP is the class of problems efficiently computable on a probabilistic computer with a very small chance of error. For RP, if the program says "Yes" it is always right. For co-RP, if the program says "No" it is always right. The question asks if there are any natural problems that have probabilistic algorithms that seem to require error on both the Yes and No answers.

In the late 80's, David Johnson asked me (and several others) the question when he was putting together his Catalog of Complexity Classes for the Handbook. I didn't have any examples and his catalog ended up citing the 1984 work of Bach, Miller and Shallit offering up Perfect Numbers (and some variants) as an answer to the question. But Bach et al. made their argument based on the fact that the current primality tests put Primes in co-RP but not in RP. In 1987, Adleman and Huang (building on Goldwasser and Kilian) put Primes in RP too. This put Perfect Numbers in RP, killing that example a few years before the Handbook appeared.

Since Primes was put in P in 2002, we have few problems known to be in BPP but not known to be in P, with the best remaining example the co-RP problem of determining whether two algebraic circuits are identical. So we still don't have any good answers to the above question except for some contrived problems like: given three algebraic circuits, are exactly two of them identical? Most complexity theorists believe we have strong enough pseudorandom generators to avoid the randomness in probabilistic algorithms altogether. So in the end the question is likely irrelevant. Still, we complexity theorists also like to know what we can prove beyond what we just believe.
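As an aside, the standard randomized algorithm behind that example is the Schwartz-Zippel test: evaluate both circuits at a random point over a large field; identical circuits always agree, while circuits computing different low-degree polynomials disagree with high probability. Here is a small sketch with the circuits stood in for by explicit Python functions (the prime, the trial count and the function names are illustrative choices of mine):

  import random

  P = 2 ** 31 - 1       # a large prime; evaluate over the field GF(P)

  def probably_identical(f, g, num_vars, trials=20):
      # One-sided error: a "different" answer is always correct; an "identical"
      # answer is wrong with probability at most degree/P per trial.
      for _ in range(trials):
          point = [random.randrange(P) for _ in range(num_vars)]
          if f(*point) % P != g(*point) % P:
              return False      # found a witness that f and g differ
      return True               # identical, or unlucky on every single trial

  f = lambda x, y: (x + y) * (x - y)
  g = lambda x, y: x * x - y * y
  h = lambda x, y: x * x - y * y + 1
  print(probably_identical(f, g, 2))   # True: they really are the same polynomial
  print(probably_identical(f, h, 2))   # False: a difference is detected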

All this doesn't mean BPP is not the right class to capture probabilistic algorithms when you can trust your randomness and don't mind a negligible error on either side. Complexity classes are defined to capture the properties we want them to have, not to mold to our limited knowledge of current problems.

Monday, December 01, 2008

Cellprobe complexity- Yao model still relevant?

Some sad news, complexity theorist Ingo Wegener passed away last Wednesday at the age of 57. We will have a proper obituary soon.

My last post asked a question about Yao's paper. MIP said (essentially) that since there are, by now, more realistic models of data structures for membership, and very good (better than binary search) algorithms, there is nothing interesting in Yao's paper for computer science, just for people passionate about Ramsey Theory.

That raises a few questions: When is a problem no longer interesting?
  1. When it stops having practical applications. If we take this answer then large parts of math and even TCS have stopped being interesting. (Maybe this is true.)
  2. When it stops being relevant to the problem it was intended to solve. This fits Yao's paper. But other branches of math are far removed from the problem they were first developed to solve. For one example, computability theory seems far removed from questions in the foundations of math. But is it still interesting?
  3. However, I still want to know the answer to my question. It's mostly been answered, but not completely. (The paper IMPLICIT O(1) PROBE SEARCH by Fiat and Naor, SICOMP Vol 22, 1993, has better bounds for some cases, but not quite for the one I had in mind.)
  4. So, only of interest to people passionate about Ramsey Theory. Actually, I doubt Ronald Graham cares about the problem. Might be more correct to say people passionate about applications of Ramsey Theory. Are there such people? Are there people who have websites dedicated to applications of Ramsey Theory? Are there papers on Ramsey Theory Applications? Perhaps there are, and to the authors the question may still be of interest.

Wednesday, November 26, 2008

Blatant Plugs

We are looking for a postdoc to join the new CS Theory and Economics groups at Northwestern. We'll consider any theory candidates though prefer candidates with research connecting both areas. We're also looking to bring in several graduate students from all areas of theory. Apply here.

Over at TTI-Chicago (where I still am an adjunct professor), we are planning to hire a couple of research assistant professors (3-year non-tenure-track) and possibly a tenure-track position as well in theoretical computer science.

This blog would be remiss in not mentioning the postdoc opportunities for complexity theorists at the new Center for Computational Intractability based in Princeton. But why be in New Jersey when you can live in the land of Obama?

The call for papers for both the 2009 Conference on Computational Complexity and the Conference on Electronic Commerce are up. Both have deadlines slightly after the STOC notification date in early February.

Midwest Theory Day will be held Saturday December 6 at Northwestern, first time at Northwestern since 1993.

And while I'm plugging, don't forget to send your complexity journal submissions to the ACM Transactions on Computation Theory.

Bill and I are off for Thanksgiving. Have a great holiday and we'll see you on Monday.

Tuesday, November 25, 2008

The Flame

I deposited a check today at a new Chase ATM. You just put the check directly into the machine; the check gets scanned and the amount verified. Saves me a small amount of time but more importantly nobody at Chase ever has to look at the check. The computer has completely taken over.

On this theme we just rented Wall·E, another Powerful-Aware-Evil computer movie, though at least this time the main protagonists are also machines. The movie scared my daughter when she saw it last summer and I had to give her the "you need to be in control of technology instead of having technology control you" talk.

In the movie, Wall·E uses a Zippo-style lighter and creates a flame. When I took a computer graphics class back in 1984 the professor presented a slide show of the state-of-the-art in computer graphics. The last shot was a photo of a flame because a good computer flame was unobtainable at the time.

One generation's "holy grail" is a new generation's "take it for granted".

Monday, November 24, 2008

Should tables be sorted? Yes but--- a question

In Yao's paper Should tables be sorted he did the following (I am paraphrasing a lot).

A (U,n;q)-WPDS (Word Probe Data Structure) for MEM is the following:
  1. A function that does the following: Given A ⊆ U, |A|=n, return the elements of A in some order. Think of them as being in CELL[1],...,CELL[n].
  2. (Assume Step 1 has been done so there are elements in CELL[1],...,CELL[n].) Given u ∈ U we want to know if u ∈ A. We have a query algorithm that asks q adaptive queries of the form What is in CELL[i]?, and then outputs YES or NO. If YES then u ∈ A, if NO then u ∉ A.
Thoughts, results, and a question I really want someone to answer.
  1. I call it a WPDS to distinguish it from the bit probe model.
  2. The obvious approach would be to SORT the elements of A and use binary search. Hence q = ⌈log(n+1)⌉. (A small sketch after this list illustrates the probe count.)
  3. For some cases where U is small compared to n, there are constant-probe algorithms.
  4. KEY: Yao showed that if U is large compared to n then sorting IS the best you can do. How big? Let R(a,b,c) be the RAMSEY number associated to a-uniform hypergraphs, b-sized monochromatic sets, and c colors. Yao showed that if |U| ≥ R(n,2n-1,n!) then sorting is optimal.
  5. QUESTION: Is a lower value known? I have looked in various places but couldn't find a lower value. However, this may be a case where I'm better off asking an expert. I hope one is reading this blog!
  6. This is one of the first applications of Ramsey Theory to computer science. It may be the first, depending on how you define application, Ramsey Theory, and computer science.
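To make the probe count in thought 2 concrete, here is a small sketch of the sorted-table scheme (my own illustration, not from Yao's paper): the storage phase puts A into the cells in sorted order, and a query binary-searches while counting each What is in CELL[i]? probe.

  def store(A):
      # Storage phase: place the n elements of A into CELL[1..n] in sorted order.
      return sorted(A)

  def member(cells, u):
      # Query phase: binary search; returns (answer, number of probes used).
      probes, lo, hi = 0, 0, len(cells) - 1
      while lo <= hi:
          mid = (lo + hi) // 2
          probes += 1                    # one probe: "What is in CELL[mid]?"
          if cells[mid] == u:
              return True, probes
          elif cells[mid] < u:
              lo = mid + 1
          else:
              hi = mid - 1
      return False, probes

  cells = store({3, 1, 4, 5, 9, 2, 6, 53})    # n = 8
  print(member(cells, 5))    # (True, 3); never more than ceil(log2(n+1)) = 4 probes
  print(member(cells, 7))    # (False, 3)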

Friday, November 21, 2008

What is your best paper? Ambiguous!

I sometimes ask people
What is your best paper?
This question is actually ambiguous. Here are criteria that you can use.
  1. Most important to the public. E.g., a big breakthrough people care about.
  2. Most important in your view. E.g., a big breakthrough on a problem that you care about, but perhaps others do not.
  3. Most citations. This might be the same as important to public but may take more time to be evident as such. This one is verifiable!
  4. You are personally attached to it independent of what the public thinks. E.g., you are delighted that it used theorems in p-adic cohomology to solve a problem in number theory, but nobody else cares.
  5. You had a big contribution to it.
  6. You liked how it evolved.
Hence, if someone asks what your best paper is, ask them to clarify the question. Or give them six answers.

Thursday, November 20, 2008

Is the Medium the Message?

Do you care about fonts?

In other words do you care about how your paper looks? Not the details of the proofs or the quality of the exposition but how the paper looks. The fonts, the margins, the spacing, the neatness of it all.

I used to care.

I was in graduate school when the great shift to LaTeX happened. All of a sudden our papers looked great, like finished journal versions right off the printer. I would spend hours on my papers and months on my thesis, making sure there were no overfull boxes, that equations lined up nicely, page breaks occurred at good places and hyphenation was done right. I didn't worry about fonts back in those days, when we thought Computer Modern looked good.

Now I don't bother. I still fix the big ugly problems but who really cares if "nondeterministic" is hyphenated properly. As you can see from this beautiful green blog, I don't try that hard on the medium. Though the style of Bill's posts can sometimes make me look like a true artist.

Truth is you get little value in our field from looking good. So better to spend the time proving new theorems than putting the old ones in a pretty font.

Wednesday, November 19, 2008

Reflection on the old days- by Joseph Kruskal

Guest Blog by Joe Kruskal. He is the Kruskal who did the MST algorithm, the Kruskal Tree Theorem and work on multidimensional scaling. However, this post is not on any of those topics. It's a response to and reflection on my post about the monotone subsequence theorem.

HIS POST:

In your post on the monotonic sequence theorem you said the following.
In those days it was harder to find out if someone else had done what you had done since they didn't have google, but it may have (in some cases) been easier since there were so many fewer researchers- you could just call them. OH- but long distance was expensive back then.
Yes, long-distance phone calls were expensive then. That's why mathematicians seldom used phone calls for that purpose. They used mail -- postal mail, of course. Now that email has become almost universal, and is seen as slow and stodgy compared with text messaging and other modes of communication that I haven't kept up with, people have no real idea what communication was like 50 years before.

The same thing was true 50 years ago. We didn't know then how communication was done 50 years before that. In England, at least, it was quite common for a well-to-do person to send a letter to a friend to propose having dinner out together, or going to a play together, or lots of other possibilities. They would expect to get a reply within say 4 hours, time enough to send another message confirming the arrangement for that evening.

In London at least, there were 4 deliveries/pickups per day, at least for the upper classes. When my wife and I visited England in the 1950s and stayed with my sister who had moved there with her British husband, we personally observed the following, which we had been told about. When a post office mail person came to the red "post box", which displayed the pickup times, he stood there waiting until the specified pickup time, to the minute (by his watch). Only then did he open the box and take out the letters and post cards. Everyone relied on the displayed pickup times, and would hurry to the box just in time, knowing that if they got there by the posted time the mail would go out right away. Watches were not so accurate then, so I imagine that the post office pickup people checked their watches against Big Ben or other large public clocks.

My own dissertation also indicates how things had changed:

Paul Erdos was telling lots of people about a conjecture due to Vazsonyi, a Hungarian mathematician he was friendly with, who he said "had died", meaning that he had left mathematics for a well-paying job with some company -- I think it was an airplane manufacturer. I was one of many people who heard him describe this conjecture. Roughly a year later, I had put a lot of work into this problem, but was still not close to a solution. By chance I bumped into Erdos at the Princeton Junction station. We chatted. I don't know how the conversation turned to the Vazsonyi conjecture -- probably I told him I had been working on it. He said, oh, you must read a recent paper by Rado (a British mathematician, also from Hungary). I quickly went to the library and found his paper, which I read with fear and trembling. Had I been scooped? It turned out that he had made significant progress, but hadn't cracked the nut. His work combined with mine finally led to a solution.

Today, the equivalent of those two chance conversations can happen via Google and email. I feel certain that science of many kinds is developing much more rapidly than it used to for this reason (except, perhaps, in fields where progress is kept secret for reasons of financial gain).

Tuesday, November 18, 2008

Secrets from the Future

One of my crypto students pointed me to the "Nerdcore Hip-Hop" group MC Frontalot. Here's a trailer for their tribute film, Nerdcore Rising.

You can download their song Secrets from the Future that makes a good point about the life of crypto. From the (slightly explicit) lyrics

Best of all, your secret: nothing extant could extract it.
By 2025 a children's Speak & Spell could crack it.

You can't hide secrets from the future with math.
You can try, but I bet that in the future they laugh
at the half-a**ed schemes and algorithms amassed
to enforce cryptographs in the past.

I don't expect a general way to break RSA or factor numbers either on classical machines (for lack of algorithms) or quantum machines (for lack of controlled entanglement) in the next couple of decades. Nevertheless you can't count on, say, a 1024- or 2048-bit RSA key being safe in a decade or two. Better algorithms combined with faster highly parallelized machines may break those codes. Or maybe they won't—but you can't be sure.

Even the NSA gives expiration dates on encrypted data. If you used 100,000 bit keys your secrets should survive into the next century. But you have to wonder—how dark are your secrets that you need them to last?

Monday, November 17, 2008

A "well known" theorem

In the excellent paper On the power of two, three, and four probes the following claim appears:
It is well known that every graph with s vertices and at least 2s edges contains a cycle of length at most 2log s
My students puzzled over this one in two ways. (1) How to prove it? Two of them came up with correct proofs that were not that hard. (2) Is it really well known? Two of them searched the web for a proof. They could not find one.

The problem with the phrase It is well known that is that it may be well known to people who know it (duh) but not to others. People not in the know don't even know if it's hard or not (it's not). Perhaps they should have said It is easy to show that. Or give a hint to the proof.

I invite my readers to give proofs to see if they differ from my students' proofs, and also so that the next time someone needs to reference this, they can point to this blog and attribute it to some cute alias.

A student asked me if giving as a reference a blog site or an unpublished paper online is legit. I would say it's more legit than giving as a reference a paper that is not online. A paper that is refereed but not online has had a few people look at it closely. A paper that is unrefereed but online might have a lot more people look at it (then again, it might not). But since the reader can access it, he or she might be able to tell for himself or herself whether the statement they need has been proven properly.

Friday, November 14, 2008

Monotone Sequence Theorem: literature thing

Sometimes the literature gives a proof, and then a reference, but the reference is not really to that proof. Here is an example. The theorem in question is the following:
For any seq of n²+1 distinct reals there is either a dec subseq of length n+1 or an inc subseq of length n+1.
In Proofs from the book the authors attribute the following argument to Erdos-Szekeres (paraphrased):
Let the seq be a(1),...,a(n²+1). Associate to each a(i) the number t(i) which is the length of a longest inc subseq starting at a(i). If there is an i such that t(i) ≥ n+1 then we are done. If not then the function mapping a(i) to t(i) maps a set of size n²+1 to a set of size n. Hence there is some s such that n+1 elements of the seq map to s. These n+1 elements form a dec seq.
The actual proof in the Erdos-Szekeres paper is this (paraphrased):
Let f(n) be the minimum number of points out of which we can always select an inc or dec subseq of length n. We show f(n+1) = f(n)+2n-1. Let a(1),...,a(f(n)+2n-1) be a sequence of distinct reals. Out of the first f(n) of them there is an inc or dec subseq of length n. Call the last element of that subseq b(1). Remove it. (There is now a seq of length f(n)+2n-2.) Repeat the process to obtain b(1), b(2), ..., b(2n). Let A be the b(i)'s that are from inc subseqs. Let B be the b(i)'s that are from dec subseqs. It is easy to show that A is itself a dec subseq and that B is itself an inc subseq. If A or B has n+1 elements then we are done. The only case left is that A and B each have n elements. Let a be the last element in A and b be the last element in B. It is easy to show that a=b. But one of them was removed before the other, so this is impossible.
Thoughts:
  1. I suspect that in talks Erdos gave he did the proof now attributed to Erdos-Szekeres and hence people naturally assumed that this is the paper it was in.
  2. I am surprised that Proofs from the book gave this proof. There is, IMHO, a better proof. I do not know whose it is.
    Let the seq be a(1),...,a(n²+1). Map every 1 ≤ i ≤ n²+1 to (x,y) where x (y) is the length of the longest inc (dec) subseq that ends with a(i). It is easy to show that this map is 1-1. If all of the (x,y) that are mapped to are in [n]×[n] then the domain is bigger than the range, contradiction. (A small numerical check of this injection appears after this list.)
  3. Erdos-Szekeres first proved this theorem in 1935. Martin and Joseph Kruskal proved it 15 years later without knowing that Erdos-Szekeres had proven it; though by the time Joseph Kruskal published it he knew. I have gathered up 5 proofs of the theorem that I know here.
  4. In those days it was harder to find out if someone else had done what you had done since they didn't have google, but it may have (in some cases) been easier since there were so many fewer researchers- you could just call them. OH- but long distance was expensive back then.
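As a quick numerical illustration of the injection in the proof from item 2 (a small sketch of mine, not from any of the sources above), the following computes the pair (x,y) for every position of a random sequence of n²+1 distinct values, checks that no pair repeats, and confirms that some coordinate reaches n+1:

  import random

  def labels(seq):
      # x[i] = length of the longest inc subseq ending at seq[i]; y[i] = same for dec.
      x = [1] * len(seq)
      y = [1] * len(seq)
      for i in range(len(seq)):
          for j in range(i):
              if seq[j] < seq[i]:
                  x[i] = max(x[i], x[j] + 1)
              if seq[j] > seq[i]:
                  y[i] = max(y[i], y[j] + 1)
      return list(zip(x, y))

  n = 4
  seq = random.sample(range(10 ** 6), n * n + 1)   # n^2 + 1 distinct values
  pairs = labels(seq)
  print(len(set(pairs)) == len(pairs))             # True: the map i -> (x,y) is 1-1
  print(max(max(p) for p in pairs) >= n + 1)       # True: a monotone subseq of length n+1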

Thursday, November 13, 2008

My New Hybrid

My younger daughter was deeply moved by An Inconvenient Truth and has been a staunch environmentalist ever since. She pushed me to change our lightbulbs as well as get a hybrid car as my next car. I usually don't let my kids influence our major purchase decisions but it's hard to argue with them when they're right.

So I got a Toyota Camry Hybrid about two weeks ago. When I ordered the car back in September I calculated I would make up the extra cost in about four years, but that was back when gas was $4/gallon and before Toyota slashed prices on their non-hybrid Camrys. I was told I had a 4-6 month wait but I picked one up sooner when someone else backed out, probably because of the economy.

I certainly get good gas mileage—I've driven 250 miles and still have half of my original tank. But I do notice several features that simply exist to make me feel good about getting a hybrid. Most cars have tank sizes so the car goes about 300 miles on a tank. They kept the large tank size in this car so I can better feel the gas mileage. There is an MPG dial next to the speedometer, a fancy display showing arrows as energy gets transferred between the gas tank, engine and batteries, and when I turn the car off it gives me an "Excellent" when I had good gas mileage. I wonder what I do wrong when I don't get the Excellent.

The neatest feature has nothing to do with being a hybrid. I never have to take the keys out of my pocket; instead the car just senses them. The doors and the trunk just automatically unlock when I open them and I just press a key to start the car. All keys should work this way.

What's missing in cars these days is active Internet connectivity. I can easily think of many uses: Updates to maps, traffic, weather, local information, music streaming and VOIP. I'm sure there are many more possibilities people will come up with once a system is in place.

Oddly enough this will probably be my last hybrid; my next car will run solely on batteries or some other technology. That's life in the fast lane.

Wednesday, November 12, 2008

The Kakeya Conjecture over finite fields resolved! Why can't we resolve P vs NP?

Recently the Kakeya Conjecture over finite fields (henceforth Kakeya) was resolved. For information on what Kakeya is and how it was solved see Dvir's paper On the size of Kakeya Sets over Finite Fields that solved it or a nice entry on Terry Tao's blog. Some applications of the techniques are in the paper by Dvir and Wigderson Kakeya Sets, new mergers and old extractors.

Fields Medal winner Terry Tao and others worked on Kakeya. There were partial results with difficult proofs. But the final proof was easy to understand. This does NOT mean it was easy to generate- we all suspect verifying is easier than generating. How easy was the proof to understand? So easy that I saw and understood a talk on it in seminar and was able to reconstruct it while proctoring an exam.

Whenever I see a math problem solved in a way that is easy but had been missed I wonder: is there an easy solution to P vs NP that is eluding us?
  1. Kakeya did not have a community of people working on it. Even though the people working on it were brilliant there were not that many of them. To solve a math problem may require a diversity of viewpoints. P vs NP has a lot of people interested in it. Are they working on it? Do they have a diversity of viewpoints? I honestly don't know.
  2. There had been partial results on Kakeya. Hence there was hope. At this time there has been very little progress on P vs NP. (I would welcome counterarguments to this.)
  3. There were no results saying such-and-such technique cannot be used to solve Kakeya. For P vs NP there are several such results: oracles, natural proofs, algebraization. (Hmmm- is having three results really having several results?) Is P vs NP the only currently open problem in math where there has been considerable effort in showing what techniques won't work? There may be some others in set theory where the conclusion independent of ZFC is a real option, but how about in fields of math where we expect to get a real answer?
  4. If P=NP (which I doubt) then a simple-to-follow-but-quite-clever proof that eluded us all is plausible. If P ≠ NP then I doubt this would happen.

Tuesday, November 11, 2008

Barrington's Theorem

A few weeks ago Bill complained about how hard it is to get an electronic journal version of Barrington's theorem. Though Barrington's theorem was brand spanking new and exciting when I started graduate school in 1985 and it remains one of my favorite theorems, it doesn't often get taught in graduate complexity courses these days. So let me talk about one of the great complexity results that, in some sense, lets you count in constant space.

Formally the theorem states that polynomial-size bounded-width branching programs have the same computation power as Boolean formulas. A branching program is a directed acyclic graph where all but two nodes are labelled by a variable name and have out-degree two: one edge labelled true and the other false. The other two nodes are the terminal nodes labelled accept and reject. There is a single root of in-degree zero. From the root, one follows a path, taking the true edge if the labelled input variable is true and the false edge otherwise, and accepting or rejecting the input based on the terminal node reached. Layer the branching program based on the distance of each node from the root; the width is the maximum number of nodes in any layer.

Barrington showed that width-five branching programs can simulate any Boolean formula and thus the complexity class NC1, which includes the majority function. But at any layer the branching program can only remember one of five nodes, less than three bits of information, as it processes through the graph. Yet you can still tell if a majority of the input variables are true. Amazing.

The key to Barrington's proof is that the group S5 (permutations on five elements) is non-solvable, i.e., there are permutations σ = (12345) and τ = (13542) such that στσ⁻¹τ⁻¹ is not the identity permutation. There is a simple write-up of the proof on pages 60-61 of the Boppana-Sipser survey.
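As a quick sanity check (a small sketch of my own; the convention of applying the rightmost permutation first is an arbitrary choice), one can verify that this commutator is not the identity; in fact it is another 5-cycle, which is exactly what Barrington's construction exploits:

  def cycle(c, n=5):
      # Turn a cycle like (1,2,3,4,5) into a permutation stored as a dict on {1,...,n}.
      perm = {i: i for i in range(1, n + 1)}
      for i in range(len(c)):
          perm[c[i]] = c[(i + 1) % len(c)]
      return perm

  def compose(f, g):
      # (f o g)(x) = f(g(x)): apply g first, then f.
      return {x: f[g[x]] for x in g}

  def inverse(f):
      return {v: k for k, v in f.items()}

  sigma = cycle((1, 2, 3, 4, 5))
  tau = cycle((1, 3, 5, 4, 2))
  comm = compose(compose(sigma, tau), compose(inverse(sigma), inverse(tau)))
  print(sorted(comm.items()))               # [(1, 4), (2, 1), (3, 5), (4, 3), (5, 2)]
  print(all(comm[x] == x for x in comm))    # False: the commutator is not the identity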

Some neat consequences of Barrington's theorem.

  • There are regular languages that are NC1-complete under projections of the inputs.
  • One can compute any language in PSPACE using not much more than a clock and constant bits of memory. (Cai and Furst)

Monday, November 10, 2008

STOC reminder- TODAY!/ NYU Theory Day

TWO ITEMS:

I) Reminder: STOC submission abstracts due today.




II)

                       New York Area Theory Day
                    Organized by:  NYU/IBM/Columbia
                    External sponsorship by: Google
                       Friday, December 5, 2008

The Theory Day will be held at
Courant Institute of Mathematical Sciences,
New York University, 251 Mercer Street,
Auditorium 109, New York.

                                Program

9:30   - 10:00    Coffee and bagels

10:00  - 10:55    Prof. Assaf Naor
                 Approximate Kernel Clustering

10:55  - 11:05    Short break

11:05  - 12:00    Prof. Joe Mitchell
                 Approx Algs for some Geometric Coverage
                 and Connectivity Problems

12:00  -  2:00    Lunch break

 2:00  -  2:55    Dr. Jonathan Feldman
                 A Truthful Mechanism for
                 Offline Ad Slot Scheduling

 2:55  -  3:15    Coffee break

 3:15  -  4:10    Prof. Yishay Mansour
                 TBA

For directions, please see
here
and
here
(building 46)

To subscribe to our mailing list,
follow instructions at
here

Organizers:
Yevgeniy Dodis   dodis@cs.nyu.edu
Tal Rabin        talr@watson.ibm.com
Baruch Schieber  sbar@watson.ibm.com
Rocco Servedio  rocco@cs.columbia.edu

=======================================================================

Prof. Assaf Naor
(New York University)

Approximate Kernel Clustering

In the kernel clustering problem we are
given a large n times n positive semi-definite
matrix A=(a_{ij}) with \sum_{i,j=1}^n a_{ij}=0
and a small k times k positive semi-definite
matrix B=(b_{ij}). The
goal is to find a partition S_1,...,S_k of {1,...n}
which maximizes the quantity

 \sum_{i,j=1}^k (\sum_{(p,q)\in S_i\times S_j} a_{pq}) b_{ij}.

We study the computational complexity of
this generic clustering problem which
originates in the theory of machine learning.
We design a constant factor polynomial time
approximation algorithm for this problem,
answering a question posed by Song, Smola,
Gretton and Borgwardt. In some cases we
manage to compute the sharp approximation
threshold for this problem assuming the
Unique Games Conjecture (UGC). In particular,
when B is the 3 times 3 identity matrix
the UGC hardness threshold of this problem
is exactly 16*pi/27. We present and
study a geometric conjecture of
independent interest which we show
would imply that the UGC threshold when
B is the k times k identity matrix is
(8*pi/9)*(1-1/k) for every k >= 3.

Joint work with Subhash Khot.

-------------------------------------------------------------------------

Prof. Joe Mitchell
(Stony Brook University)

Approx Algs for some Geometric
Coverage and Connectivity Problems
We examine a variety of geometric
optimization problems.  We describe
some recent progress in improved
approximations algorithms for several
of these problems, including the
TSP with neighborhoods, relay
placement in sensor networks,
and visibility/sensor coverage.
We highlight many open problems.

-------------------------------------------------------------------------

Dr. Jonathan Feldman
(Google)

A Truthful Mechanism for
Offline Ad Slot Scheduling

Targeted advertising on web pages
is an increasingly important
advertising medium, attracting
large numbers of advertisers and users.
One popular method for assigning ads
to various slots on a page (for
example the slots along side
web search results) is via a real-time
auction among advertisers who have
placed standing bids for clicks.
These "position auctions" have been
studied from a game-theoretic
point of view and are now well
understood as a single-shot game.
However, since web pages are visited
repeatedly over time, there are
global phenomena at play such as
supply estimates and budget
constraints that are not modeled
by this analysis.

We formulate the
"Offline Ad Slot Scheduling" problem,
where advertisers are scheduled
beforehand to slots on a web page for
portions of the day.  Advertisers
specify a daily budget constraint,
as well as a per-click bid, and may
not be assigned to more than one
slot on the page during any given
time period.  We give a scheduling
algorithm and a pricing method that
amount to a truthful mechanism
under the utility model where bidders
try to maximize their clicks,
subject to their personal constraints.
In addition, we show that the
revenue-maximizing schedule is not
truthful, but has a Nash
equilibrium whose outcome is identical
to our mechanism.  Our
mechanism employs a descending-price
auction that maintains a solution
to a machine scheduling problem whose
job lengths depend on the price.

Joint work with
Muthu Muthukrishnan,
Eddie Nikolova and
Martin Pal.

-------------------------------------------------------------------------

Prof. Yishay Mansour
(Google and Tel Aviv University)

TBA

-------------------------------------------------------------------------




Friday, November 07, 2008

A new logical fallacy

Those of us who have taught logic to students are familiar with some of the fallacies they make: (1) confusing OR with XOR (reasonable given everyday English usage), (2) thinking that (a --> b) --> (b --> a), and (3) thinking that from

A1 AND A2 AND A3 --> B

you deduce something about the writer's opinion of A3.
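A quick sanity check (my own illustration, in Python): enumerating all truth assignments shows the implication is consistent both with A3 true and with A3 false, so asserting it commits the writer to nothing about A3.

from itertools import product

a3_values_consistent_with_claim = set()
for a1, a2, a3, b in product([False, True], repeat=4):
    claim = (not (a1 and a2 and a3)) or b   # A1 AND A2 AND A3 --> B
    if claim:
        a3_values_consistent_with_claim.add(a3)
print(a3_values_consistent_with_claim)   # {False, True}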

This recently happened, though it wasn't a student. It was a commenter on this blog. In a recent blog post I wrote about McCain's concession speech:
If he was that way the entire time he might have won (if he also didn't pick Palin and we didn't have the economic crisis and we were more clearly winning the Iraq War).
This is an IF-THEN statement. There is no logical way to deduce what I think of any of the premises. One of the commenters committed error (3) above:
A country is destroyed and half million people are killed, and yet the only thing you feel regret about is not "more clearly winning". Excuse me, Professor Gasarch, I never held hope for the humanity of US, but a comment like this from an intellectual in this country, just taught me how coldblooded the americans can be.
The commenter raised an interesting question: If a writer says A-->B then what can you deduce about the writer's opinion of A?
  1. If the work is in Large Cardinals then likely the writer thinks that the Large Cardinal hypothesis is true. Note that we do not know this logically.
  2. In papers that prove things contingent on P\ne NP, or that factoring is hard, or the usual derandomization assumptions, the authors think the assumption is true. Note that we know this by sociology, not by logic.
  3. (I may be off on this one- if so please correct.) Non-Euclidean Geometry was started by assuming the Parallel Postulate was false, hoping to derive a contradiction (and thereby show that assumption was FALSE), and seeing what could be derived from it. Let A be: For a line L and a point p not on that line there are an infinite number of lines through p that do not intersect L. Let B be: The sum of the angles of a triangle is LESS THAN pi. When someone proved A implies B they may have thought that A was false.
  4. Pat Buchanan said (I am paraphrasing)
    If McCain had presented more ties linking Obama to Ayers and Wright then he would have won.
    While this could be a simple IF-THEN statement, given who he is we know that he thinks these ties are relevant. Keith Olbermann may have expressed a similar sentiment differently:
    If McCain had presented more lies linking Obama to Ayers and Wright then he would have won.
    In this case we can tell what Keith Olbermann thinks of the assumption.
  5. SO- if someone says A-->B then you can't really deduce what the speaker thinks of A LOGICALLY, but you can use other things he has said and his reputation to discern what he thinks of A. Reasoning from context and personality can be useful, though it is not as rigorous as we are used to. Students in a logic course should not use it.

Thursday, November 06, 2008

The Terminal Man

Michael Crichton passed away yesterday. His early novels and movies showing the potential dark sides of technology had a strong impact on me in my youth.

The Andromeda Strain deals with an extraterrestrial virus from a military satellite. The movie Westworld, written and directed by Crichton, is about a fantasy land where a gun-slinging robot, played by Yul Brynner, doesn't behave as he should.

The Terminal Man doesn't refer to someone about to die but to direct connections between humans and computers. If I remember right, people became addicted to stimulating themselves with these connections. Addicted to the network? You have to be kidding.

My favorite Crichton book is The Great Train Robbery about the meticulous planning and execution of a massive gold heist on a train in 1855. Not much technology but a very logical plot line. The movie, also written and directed by Crichton, not surprisingly follows the book quite closely.

I haven't enjoyed his later work as much. The mathematician Ian Malcolm in Jurassic Park comes off as a babbling philosophical know-it-all who happens to be always right. The ridiculous holographic database in Disclosure is just embarrassing. These later books often have gratuitous action scenes just so they might make better movies.

Nevertheless Crichton knew how to make technology very creepy. Even if these books were not quite that realistic they got the young me excited (and worried) about the power of technology and computers.

Wednesday, November 05, 2008

Random thoughts about the election

Random comments on the election.
  1. John McCain gave a nice concession speech and was excellent on SNL on Saturday. If he was that way the entire time he might have won (if he also didn't pick Palin and we didn't have the economic crisis and we were more clearly winning the Iraq War). However, I've heard that line before- Kerry, Gore, and Dole also looked a lot better after they lost. Are candidates overly managed and overly cautious? Does this work against them? (Obama seems to have escaped that.)
  2. Whenever a party loses it splits into factions: (1) We lost because we strayed from our true principles. They can split further on what the true principles are. Some of these may support Palin in 2012. (2) We lost because we are too isolated from the mainstream. Some of these may blame Palin for the defeat.
  3. Could any Republican have won the Presidency this year? With an unpopular war and the economic crisis they would have needed a weak opponent and some distance from George Bush (Mitt Romney 2006 might have worked). Obama was strong competition and John McCain was seen (fairly or unfairly) as being just like W.
  4. Hillary-supporters had basically two arguments in the primaries: (1) Hillary has a better chance of beating McCain than Obama does. This has proven false. Or has it? Better chance- how can that be quantified once the event has happened? (2) Hillary would make a better president than Obama. We will never know.
  5. Obama ran an excellent campaign. McCain ran a terrible campaign. McCain had current events against him (as Colbert has said the truth has a liberal bias). Here is a silly statement: Obama's victory is due 30% to his campaign, 40% to McCain's campaign, and 30% to reality. I just made up those numbers, but can that question be asked and answered?
  6. This is why I study math. Math has well defined questions that (for the most part) have answers (though we may not know them yet). But the question why did Obama win? can't really be asked rigorously or answered definitively. One can even criticize how I phrased it- should I have asked why did McCain lose??

Tuesday, November 04, 2008

Election Day

I made a snapshot of the electoral markets map at 5 PM Central time Monday to compare to the actual votes. Meanwhile you can continue to watch the markets aggregate rumors and results in real time at electoralmarkets.com.

I started a couple of different posts for today but nothing seemed to fit. This is an exciting day for America and possibly the world. If I was young and foolish I'd head down to Grant Park tonight for the big rally (as some of my younger colleagues will do) but now just plan to be at home sharing the experience with my family. Go vote if you can and let's watch history get made!

Monday, November 03, 2008

Will fivethirtyeight.com use this poll?

I had my Graduate Class in Communication Complexity VOTE FOR PREZ in a secret ballot. The rules were: They could vote for anyone on the ballot in Maryland, (I supplied a ballot with all of the candidates in Maryland) OR write someone in OR write NONE OF THE ABOVE. Here are the candidates and how many votes they got.
  1. Obama-Biden, Democrat party: 11 votes
  2. McCain-Palin, Republican: 1 vote
  3. McKinney-Clemente, Green: 1 vote
  4. Barr-Root, Libertarian: 0 votes
  5. Nader-Gonzalez, Independent: 1 vote
  6. Baldwin-Castle, Constitution: 0 votes
  7. None of the Above: 1 vote.
Speculation:
  1. Usually the Republican does better than this, and McCain is one of those Republicans who (perhaps used to) appeal to moderates and even Democrats. So why only one vote? (1) "I used to like McCain before he turned rightward this campaign", (2) My class actually likes Obama. In the past there were people who couldn't stand Kerry or Gore, so while they didn't like Bush, they voted for him anyway.
  2. Usually I get one or two Libertarian voters. Why not this year? Speculation: (1) They think of Barr as really being a Republican, (2) They think Obama really does transcend party and ideology. (3) The libertarians are tired of having their votes not count. (4) They got confused by the butterfly ballot that I used.
  3. Our student body is somewhat liberal. Or maybe it's just that the students at Univ of MD interested in applying Ramsey Theory to Communication Complexity are somewhat liberal.
  4. One of my colleagues was amazed that McCain got one vote.
  5. Of the 15 people in my class, around 10 are from other countries.
  6. This could be bad news for McCain. There is that classic saying: as goes Graduate Communication Complexity, so goes the nation.
  7. Is it foolish to look at a small poll from an obviously biased source? Perhaps yes, but compare that to the networks yammering on endlessly about small shifts here and there.

Friday, October 31, 2008

Focs Wrap Up

(Guest post from Rahul Santhanam)

As promised, I did do a bit of experimenting this FOCS in terms of attending talks from other areas. Here's an encapsulation of the results:

Cryptography - Yevgeny Vahlis spoke about his negative result with Boneh, Papakonstantinou, Rackoff and Waters on black-box constructions of identity-based cryptosystems (cryptosystems in which an arbitrary string can be used as a public key) from trapdoor permutations. ID-based cryptosystems have been extensively studied recently in work of Dan Boneh and others. Security of known constructions is proved in the random oracle model, and it would be interesting to base security on the same assumptions as those used in standard public-key cryptography. This result indicates this might be too much to hope for, at least using standard techniques.

Learning - Adam Klivans gave an excellent talk on "Learning Geometric Concepts via Gaussian Surface Area" (with O'Donnell and Servedio). There's been a lot of interesting work recently on learning in the presence of random or adversarial noise. It's known that functions with low noise sensitivity, or even with low Gaussian noise sensitivity, can be learned efficiently in this setting, but these quantities are hard to bound in general. The current work resolves some open problems about learnability of geometric concept classes by using some deep mathematical results relating Gaussian noise sensitivity to Gaussian surface area, and working with the latter more tractable quantity.

Games with Entangled Provers - I went to a couple of talks on another currently hot topic: the power of two-prover games with entangled provers. A semi-definite programming formulation due to Tsirelson can be used to show that the optimal value can be computed efficiently for a very simple form of 2-prover games, known as XOR games. The first talk that I attended was given by Thomas Vidick on "Entangled Games are Hard to Approximate" (with Kempe and Kobayashi and Matsumoto and Toner) - he showed a reduction from the problem of approximating the value of 2-prover games for classical provers (known to be NP-complete) to the problem of approximating the value of 3-prover games for entangled provers. A complementary talk by Ben Toner on "Unique Games with Entangled Provers are Easy" (with Kempe and Regev), showed that for unique games, the optimal value can actually be approximated efficiently using "quantum rounding" to a semi-definite program. The most interesting open problem in this area seems to be to derive some sort of upper bound on the complexity of approximating the value for general games.

Algorithms - I heard Dimitris Achlioptas speak on "Algorithmic Barriers from Phase Transitions" (with my colleague Amin Coja-Oghlan). This paper tries to understand the reason why simple algorithms for constraint satisfaction problems on random instances fail in the region around the threshold, by showing that this "region of failure" coincides with the region where the structure of the solution space changes from being connected to being (essentially) an error-correcting code. Rather intriguing that a computational fact, the success or failure of standard algorithms, is so closely related to a combinatorial artifact. Finally, I went to Grant Schoenebeck's talk on proving lower bounds for the Lasserre hierarchy of semi-definite programming formulations of CSP problems, which is interesting because a number of different algorithmic techniques including the Arora-Rao-Vazirani formulation of Sparsest-Cut can be implemented in the lower levels of the hierarchy. Grant's result uses a connection to the width complexity of resolution, which I found very surprising, but then my understanding of this area is rather naive...

All interesting stuff, but I'd actually prefer FOCS to be even broader, and representative of all areas of theory. If that requires multiple sessions, so be it. Currently, FOCS seems to be an algorithms, complexity and combinatorics conference with papers from other areas filtering in unpredictably depending on the composition of the committee. It's very creditable that the standard of papers has remained consistently high (or even improved) over the years. But with several major sub-areas (semantics and verification, concurrency theory, database theory, theory of distributed computing, computational geometry, computational biology etc.) represented barely or not at all, it's quite hard to still make the case that FOCS and STOC are absolutely central to theoretical computer science as a whole. I guess the question is how much we value FOCS and STOC being core conferences?

Thursday, October 30, 2008

Election Special

A hodgepodge of election stuff, in my last post before November 4.

The markets and polls both suggest that the election is not that close. There's no question who will win in my (and Obama's) home state. The senate race here is even less interesting with the Republicans not even bothering to mount a serious campaign against the incumbent and Majority Whip, Dick Durbin. At least the house race in my district is tight with the incumbent, Mark Kirk, a moderate republican running against Dan Seals, an up and coming African-American. Sounds familiar.

Before voting, check out the answers Obama and McCain both gave to Science Debate 2008. Keep in mind that science will likely get a back seat given the tight budget constraints that we will have in the next several years. McCain's promise to freeze most federal spending will be particularly bad for science.

In the category of "Good Searching means Less Privacy" comes the Contributions Search Engine on the New York Times site. Type in a relatively unique name like "Fortnow" and you'll discover my $255 donation to the Obama campaign. In full disclosure, I also donated $100 to Obama in the primaries.

So why $255? Did I really mean to donate the maximum I could in one unsigned byte? Actually $255 = $1000/4 + $5, with the $5 coming from one of my daughters who wanted to add her own contribution. I still live in a base-ten world.

Wednesday, October 29, 2008

Math NAMES

(A quick note: Howard Karloff is organizing an excursion to the South Pacific during SODA. Sign up by November 3.)

And now for today's post:

What do the names Pascal, Hilbert and Mittag-Leffler have in common, aside from all being names of math folks?

I have AN answer in mind. You may very well give me other ones. I'll be curious if anyone gives me the one I had in mind.

Tuesday, October 28, 2008

The Gossip Center

Rahul Santhanam continues to spread the word from FOCS

Conferences are not really about the talks, they're about socializing and gossip. Who's having an affair with whom? Who's been drinking too much? It's for answers to questions like these that we brave jet lag, find others to teach classes for us, endure conference food…

Well, not quite. Poets and artists might gossip about such things, but scientists are on a higher plane, of course; we're beings of pure reason, are we not? The questions that concern us tend to be more of the form "Who's deserted University X for University Y?" & "Who's been spending weeks holed up in his room thinking about Conjecture Z?". And while I'm sure there are those of you who want to know more about the various research addictions triggered off by Razborov's proof of Bazzi's theorem, jobs news is probably of more general interest.

So here's what I've learned in the past week:

Emanuele Viola → Northeastern
Andrej Bogdanov → Chinese University of Hong Kong
Nisheeth Vishnoi → LRI, Paris
Neeraj Kayal → Microsoft Research, Bangalore
Brent Waters → University of Texas, Austin
Shang-Hua Teng → University of Southern California, Los Angeles
[Lance's Note: Rahul Santhanam → Edinburgh]

I rely on commenters to make good the omissions.

It certainly seems as if there's been more movement than usual on the job scene since STOC. The grad students and postdocs I've talked to seem pretty apprehensive about the market for next year, and with fair reason, I think. The market hasn't been that great for theory in the past couple of years, in any case, and the financial crisis seems likely to lead to funding cuts and more hiring freezes.

Perhaps there is some cause for optimism in the increasing number of postdocs available? The improvement in the NSF situation in the past couple of years means that more faculty in North America are able to hire postdocs. The emergence of Microsoft Research, Cambridge and the new Center for Computational Intractability in Princeton certainly won't hurt. But while having more postdocs around is good for our field, it might not be such a good thing from the point of view of the postdocs themselves. First, there is the intrinsic transience of the position - I had very good research environments in my postdocs at Simon Fraser and Toronto, but I never escaped the feeling of being in Purgatory. Second, there's the fact that job applications and interviews are very time consuming - it's hard to be productive when you know your entire future career might depend on how well you can advertise your research. And do we really want theoretical computer science to become like theoretical physics, where it's normal for graduating students to expect a postdoc apprenticeship of 6-7 years before they can find a permanent job?

For theorists not intent on getting a job in North America, the situation might be a little better. There are increasing opportunities in Europe and especially in Asia, as recent job news indicates. In general, the best attitude might be to be realistic about your prospects and to use the competitiveness of the market as motivation for your research.

Monday, October 27, 2008

Post-Lunch Highlights, Day 1

More from Rahul at FOCS

I realize I could quite easily spend more time writing about the talks than going to them, so I'll keep it shorter.

  • Gil Kalai gave a vastly entertaining talk on "Elections can be Manipulated Often" (Kalai and Friedgut and Nisan). Clearly a topic of current relevance, but Gil bravely resisted making an Obama-McCain reference. The main result of the paper essentially says that the voters' cumulative manipulation power (meaning their capacity to bring about their preferred global outcome by voting in a way that's different from their actual ranking of the candidates) in any "reasonable" election (where the voting scheme is far from yielding a dictatorship, and is insensitive to the names of the candidates) is invariably high, in a setting where voter preferences are modelled by random choices. The main open question from the talk was to prove that the current financial crisis was unavoidable.
  • Mihai Patrascu gave a talk on his paper "Succincter", which won the Machtey best student paper award. His paper greatly advances the state of the art in the field of succinct data structures. For instance, the paper shows how to store N "trits" (numbers that are either 0, 1 or 2) in N log(3) + O(1) bits of memory with only logarithmic query time (a toy packing sketch, far weaker than the paper's guarantee, follows this list). The proof uses a clever recursive encoding scheme.
  • Dana Moshkovitz spoke about her Best Paper-winning work with Ran Raz on constructing sub-constant error 2-query PCPs of nearly linear size. This has been a major open problem for quite a while now, and the resolution of this open problem yields improvements to a large family of hardness of approximation results. The proof builds on earlier work by Moshkovitz and Raz constructing sub-constant error low degree tests of nearly linear size.
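For contrast, here is that toy packing sketch (mine, and far weaker than the paper's result): grouping trits five to a byte works because 3^5 = 243 <= 256, but it spends 1.6 bits per trit against the log_2(3) ~ 1.585 optimum, i.e. Omega(N) wasted bits in total, whereas "Succincter" gets the total redundancy down to O(1) bits.

def pack_trits(trits):
    # trits: list of values in {0, 1, 2}; pack five trits per byte
    out = bytearray()
    for i in range(0, len(trits), 5):
        val = 0
        for t in reversed(trits[i:i+5]):
            val = 3 * val + t       # 0 <= val <= 242 < 256
        out.append(val)
    return bytes(out)

def get_trit(packed, i):
    # query: decode only the byte containing trit i
    return (packed[i // 5] // 3 ** (i % 5)) % 3
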
There's been precious little controversy at this conference, so I feel duty-bound to stir something up. There's an asymmetry between the two seminar rooms – one is considerably larger, wider and has better acoustics than the other. Is it a coincidence that the more complexity-oriented talks in Session B are in the smaller room, and the algorithms-oriented Session A talks in the larger one? I think not. This is merely the latest instance in the step-sisterly treatment accorded to complexity theory down the ages, right from when Euclid designed his algorithm but failed to analyze its complexity. I urge all right-minded readers to express their outrage at this state of affairs.

More seriously, I can't think of a hitch; even lunch yesterday was good. Kudos to Sanjeev Khanna, Sudipto Guha, Sampath Kannan and the student volunteers.

Business Meeting

Another FOCS report from Rahul

The business meeting was compered by Paul Beame.

  1. The local chair Sanjeev Khanna spoke. 270 registered this year, as opposed to 250 last year and 220-230 the year before that. Excelsior!
  2. The P.C. report was given by the P.C. chair R. Ravi. Some salient features:
    1. 349: not the number of reviewed papers, but rather the number of external reviewers
    2. June 13, 2008: the Night of the Long Knives, when as many as 150 papers had their hopes quashed.
    3. "Succincter": In an unprecedented double, Mihai's Best Paper Title awardee also wins the Machtey Best Student Paper award
    4. Dana Moshkovitz and Ran Raz win best paper for "Two-query PCP with Sub-Constant Error"
  3. Milena Mihail gave a presentation on FOCS 2009, which will be held in Atlanta from November 8-10 next year.
  4. Volunteers were requested for hosting FOCS 2010. Three hands shot up instantly, and after a long and contentious debate culminating in a beer-bottle fight... Nah.
  5. We all got to vote nine days before the big day. David Shmoys was elected vice-chair of the IEEE Technical Committee on Mathematical Foundations of Computing.
  6. A couple of major Prize announcements were made; no suspense involved for readers of this blog. Volker Strassen is the recipient of the Knuth Prize, and Spielman and Teng have won the Godel Prize for their paper on smoothed analysis of linear programming.
  7. NSF Report by Sampath Kannan, who's a Division Director in the CCF (Computing and Communications Foundations) program. We theorists seem to have a nice program of our own now directly under CCF called Algorithmic Foundations, which covers most of the traditional STOC/FOCS areas, and has a budget over $30 million for this year. Grant deadlines coming up pretty soon actually: for the Large grants on October 31, for the Medium grants shortly afterward in November, and for the Small ones in December. There was also some information on relevant cross-cutting funding opportunities.
  8. STOC 2009 will be held May 31- June 2 in Bethesda, Maryland. The submission server is already active. Title and short abstract are due November 10, extended abstract is due November 17.
  9. Pierre Mckenzie recapitulated CCC 2008 in 30 seconds, and then announced that CCC 2009 would be held in Paris. Paris, France, as a matter of fact; excited murmurs from the audience. For a brief moment there complexity theory was cool.
  10. David Johnson announced that SODA 2009 would be held January 4 - January 6 in New York, and that Volker Strassen would be giving his Knuth Prize lecture there.
  11. Miscellaneous announcements, including a long-overdue archiving of old FOCS proceedings in IEEE Xplore (I believe the only ones left are '60, '61, '63 and '67) and news about the establishment of the Center for Computational Intractability in Princeton, funded by the NSF/CISE Expeditions grant announced earlier on this blog.
That was one of the duller business meetings in recent memory. Which is a good thing, of course – it means we're all happy and getting along.

Sunday, October 26, 2008

Pre-Lunch Session

More from Rahul

A typical FOCS day divides up the same way as a day in a cricket test match: pre-lunch session, afternoon session and post-tea session. And approximately the same fraction of the audience is awake and/or paying attention?

I set myself the goal of attending the first talk of the day, which naturally meant that I stumbled in around halfway through Manindra Agrawal's talk on "Arithmetic Circuits: A Chasm at Depth Four" (Agrawal and Vinay). This is one of my favorite papers in the conference: the result is surprising, easy to state and not hard to prove. The authors show that an exponential lower bound on arithmetic circuit size for depth-4 circuits actually implies an exponential lower bound on circuit size for general arithmetic circuits. There are precedents for this kind of depth reduction for arithmetic circuits (unlike for Boolean circuits), eg. the Berkowitz-Skyum-Valiant-Rackoff result that polynomial size circuits can be simulated by polylogarithmic-depth circuits, but the nice thing about this result is that it gives an explanation for why most recent progress on arithmetic circuit lower bounds has been in the very low depth regime. Manindra's talk was marred a little by technical glitches, but he has an unparalleled ability to stay cool in a crisis.

Then came an excellent talk by Madhur Tulsiani on "Dense Subsets of Pseudorandom Sets" (Reingold, Trevisan, Tulsiani and Vadhan). This is a topic that Luca has said a lot about in his blog, and I certainly can't hope to better that. The final talk in the session was by Timothy Chow on "Almost-natural Proofs", about how the natural proofs framework of Razborov and Rudich ceases to be a barrier to proving circuit lower bounds if it is relaxed slightly. The talk was good, but I was even more intrigued by the speaker. He is unusually wide-ranging in his choice of problems to work on - he comments quite often on the FOM mailing list, and I know he's worked among other things on logics for polynomial time. It's good for our heavily algorithms-and-complexity centered community to have people from outside the mainstream working on our problems.

You'll notice that most of the talks discussed so far are complexity-related - old habits die hard. But I did try compensating by going to Mihai Patrascu's talk on "Dynamic Connectivity: Connecting to Networks and Geometry" (Chan, Patrascu and Roditty), about data structures for dynamic connectivity in graphs where both vertex and edge updates are allowed. There did seem to be some elegant ideas involved, but the talk was geared for an audience which knew something already about dynamic data structures, and I didn't follow the details. This reminds me of why I don't tend to go to talks too far afield of areas I work in - presenters do tend to take certain things for granted in their audience, and it makes sense for them to do so, since most members of the audience probably have a reasonable background in the subject of the talk. Also, it wasn't the most enthusiastic of talks, but I am looking forward to Mihai's talks on "Succincter" and "(Data) Structures", both of which concern very natural questions that are interesting even to non-specialists in data structures.

Next for me was another algorithmic talk: Chris Umans on "Fast Modular Composition in any Characteristic" (Kedlaya and Umans), which showed that the composition of two polynomials f and g of degree n modulo a third polynomial of the same degree can be computed very efficiently, in time that's nearly linear in n. The proof goes through an earlier reduction by Chris of the problem to simultaneous evaluation of multilinear polynomials on several points; the new contribution is a clever solution to this simultaneous evaluation problem through recursively applying Chinese remaindering. The talk, as is usual with Chris, was crisp and well-organized.
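As a baseline (my own sketch, emphatically not the Kedlaya-Umans algorithm), naive Horner-style modular composition already shows what the problem is: with schoolbook polynomial arithmetic it costs about n multiplications of degree-n polynomials, i.e. roughly n^3 coefficient operations, which is what the new result improves to nearly linear time.

def poly_mulmod(a, b, h, p):
    # multiply polynomials a and b (coefficient lists, lowest degree first,
    # entries mod the prime p) and reduce modulo the monic polynomial h
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    d = len(h) - 1
    for i in range(len(prod) - 1, d - 1, -1):   # clear degrees >= d, top down
        c = prod[i]
        if c:
            for j in range(d + 1):
                prod[i - d + j] = (prod[i - d + j] - c * h[j]) % p
    return prod[:d]

def modular_composition(f, g, h, p):
    # Horner's rule: f(g) = f_0 + g*(f_1 + g*(f_2 + ...)) mod h
    result = [0] * (len(h) - 1)
    for coeff in reversed(f):
        result = poly_mulmod(result, g, h, p)
        result[0] = (result[0] + coeff) % p
    return result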

That concluded my pre-lunch session – I wouldn't have been alert enough to do justice to the remaining talks. Stay tuned for a report on the rest of the day's play.

Sampling FOCS

Rahul Santhanam reports from Philadelphia.

I'm in Philly for FOCS, and will be reporting on the extravaganza for this blog, including the drama and suspense of the talks ("Will there be heckling?" "Will he make an Obama joke?" "Will she break the 20-minute barrier?") and the drunken revelry of the business meeting. Typically, I have a few simple rules for attending talks:

  1. No conflicts with my regular 4 am - noon bedtime.
  2. Attending a double-digit number of talks is just as good as attending no talks at all.
  3. The title better have the word "complexity" in it.
But with my reporting duties as an excuse, I've resolved to discard my old inertial lifestyle and tax my brain with the new and unfamiliar. Facility location, electronic commerce, quantum entangled games – bring it on!

You could help me with this. If you're a theory junkie who couldn't make it here and there's some talk you really wanted to hear, let me know and I might be able to give you some idea – the barest idea, or at the very least a fictional idea, of what it was like to be there…

Saturday, October 25, 2008

Knuth Prize to Strassen

The ACM announced that Volker Strassen will receive the Knuth Prize for his algorithmic work, most notably on primality, matrix multiplication and integer multiplication. The Knuth Prize is given for major contributions over an extended period of time. The winner gives a Knuth Prize lecture and Strassen will give his at SODA.

Strassen's matrix multiplication algorithm is a simple but amazing recursion to beat the easy cubic bound. And the probabilistic algorithm for primality with Solovay drove complexity research into probabilistic computation and helped make modern cryptography possible.
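For readers who have never seen it, here is a minimal sketch of the recursion (my own Python rendering, assuming the dimension is a power of two): seven recursive multiplications instead of eight give the O(n^{log_2 7}) ~ O(n^{2.81}) running time.

def strassen(A, B):
    # multiply square matrices given as lists of rows; len(A) a power of two
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):   # split into quadrants M11, M12, M21, M22
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))   # the seven products
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])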

Friday, October 24, 2008

Who to vote for?

I have not blogged much on the election because I doubt I can say anything you haven't already heard. I have gathered here reasons for/against the candidates that you might not have heard. They are NOT mine- they are things I've heard or read. If 1/2 of these comments are new to 1/2 of my readers, I'll be happy.

REASONS TO VOTE FOR OBAMA OR AGAINST MCCAIN
  1. I've always wanted a Prez younger than me!
  2. Joe Biden is one of the few non-millionaires in the Senate. Hence he understands the working man.
  3. If Obama wins then Hillary probably won't be able to run for 8 years. (If McCain wins she can run in 4 years.)
  4. In the first debate Obama showed that he was willing to agree with McCain on some things. McCain did not.
  5. McCain does not know how to use email.
  6. I have two bets that Obama will win. (One is with an African-American who thinks the country will never elect an African-American Prez. She hopes she is wrong and will gladly lose the bet.)
  7. The McCain-Palin campaign spent $150,000 on clothes for Sarah Palin. This shows she has no understanding of economics.
REASONS TO VOTE FOR MCCAIN OR AGAINST OBAMA
  1. I've always wanted a Prez older than my grandparents!
  2. Joe Biden is one of the few non-millionaires in the Senate and hence has no understanding of Economics.
  3. (Someone at my church told me this.) I live in the same Condo as John McCain. He voted for me for President of the Condo Association, so I feel I owe it to him to vote for him for President of America.
  4. Sarah Palin is a hockey mom with 5 kids and a job. Hence she understands the working mom.
  5. Obama is a Muslim who has been attending Reverend Wright's Christian Church for 20 years. (NOTE- 10% of Americans think Obama is Muslim. 1% think he is Jewish.)
  6. An intelligent man who I trust agrees with McCain on many things. That man: Barack Obama (First Debate and a few other things).
  7. The Republicans created the mess we're in now, make them clean it up.
REASONS TO VOTE AGAINST BOTH OF THEM
  1. I'm from a non-swing state so my vote does not matter.
  2. The Daily Show, The Colbert Report, Saturday Night Live, and The Capitol Steps won't have much to work with. Obama, McCain, and Biden just aren't that funny. Palin makes political satire redundant. (Alaska is next to Russia and therefore she has Foreign Policy Experience.- is that John McCain? Jon Stewart? John McCain on Jon Stewart's show?)
  3. Who do the Assoc of American Political Satirists endorse? Having read this I still can't tell.