Friday, October 31, 2008

FOCS Wrap Up

(Guest post from Rahul Santhanam)

As promised, I did do a bit of experimenting this FOCS in terms of attending talks from other areas. Here's an encapsulation of the results:

Cryptography - Yevgeny Vahlis spoke about his negative result with Boneh, Papakonstantinou, Rackoff and Waters on black-box constructions of identity-based cryptosystems (cryptosystems in which an arbitrary string can be used as a public key) from trapdoor permutations. ID-based cryptosystems have been extensively studied recently in work of Dan Boneh and others. Security of known constructions is proved in the random oracle model, and it would be interesting to base security on the same assumptions as those used in standard public-key cryptography. This result indicates this might be too much to hope for, at least using standard techniques.

Learning - Adam Klivans gave an excellent talk on "Learning Geometric Concepts via Gaussian Surface Area" (with O'Donnell and Servedio). There's been a lot of interesting work recently on learning in the presence of random or adversarial noise. It's known that functions with low noise sensitivity, or even with low Gaussian noise sensitivity, can be learned efficiently in this setting, but these quantities are hard to bound in general. The current work resolves some open problems about learnability of geometric concept classes by using some deep mathematical results relating Gaussian noise sensitivity to Gaussian surface area, and working with the latter more tractable quantity.

Games with Entangled Provers - I went to a couple of talks on another currently hot topic: the power of two-prover games with entangled provers. A semi-definite programming formulation due to Tsirelson can be used to show that the optimal value can be computed efficiently for a very simple form of 2-prover games, known as XOR games. The first talk that I attended was given by Thomas Vidick on "Entangled Games are Hard to Approximate" (with Kempe and Kobayashi and Matsumoto and Toner) - he showed a reduction from the problem of approximating the value of 2-prover games for classical provers (known to be NP-complete) to the problem of approximating the value of 3-prover games for entangled provers. A complementary talk by Ben Toner on "Unique Games with Entangled Provers are Easy" (with Kempe and Regev), showed that for unique games, the optimal value can actually be approximated efficiently using "quantum rounding" to a semi-definite program. The most interesting open problem in this area seems to be to derive some sort of upper bound on the complexity of approximating the value for general games.
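To make the Tsirelson connection concrete, here is a small sketch (my own illustration using the cvxpy package, not code from any of these papers) of the SDP that computes the entangled bias of an XOR game: entangled strategies correspond to unit vectors u_s, v_t, and the bias max Σ_{s,t} π(s,t)·c(s,t)·⟨u_s, v_t⟩ is a semidefinite program over their Gram matrix.

```python
import numpy as np
import cvxpy as cp

def xor_game_entangled_bias(pi, c):
    """Tsirelson-style SDP for the entangled bias of an XOR game (sketch).
    pi[s,t] is the probability of question pair (s,t); c[s,t] = +1 if the
    provers should answer with equal bits, -1 otherwise.  The bias is
    max sum_{s,t} pi[s,t]*c[s,t]*<u_s, v_t> over unit vectors, an SDP over
    the Gram matrix of (u_1..u_S, v_1..v_T)."""
    S, T = pi.shape
    W = np.zeros((S + T, S + T))
    W[:S, S:] = pi * c / 2.0            # split the weight so trace(W G) gives
    W[S:, :S] = (pi * c).T / 2.0        # exactly sum pi*c*<u_s, v_t>
    G = cp.Variable((S + T, S + T), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(W @ G)), [cp.diag(G) == 1])
    prob.solve()
    return prob.value

# CHSH: uniform questions, answers should differ only when s = t = 1.
pi = np.full((2, 2), 0.25)
c = np.array([[1.0, 1.0], [1.0, -1.0]])
bias = xor_game_entangled_bias(pi, c)
print("entangled value =", (1 + bias) / 2)   # ~0.8536 = cos^2(pi/8)
```

Run on CHSH this returns the familiar entangled value cos²(π/8) ≈ 0.854, compared to 0.75 for classical provers.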

Algorithms - I heard Dimitris Achlioptas speak on "Algorithmic Barriers from Phase Transitions" (with my colleague Amin Coja-Oghlan). This paper tries to understand the reason why simple algorithms for constraint satisfaction problems on random instances fail in the region around the threshold, by showing that this "region of failure" coincides with the region where the structure of the solution space changes from being connected to being (essentially) an error-correcting code. Rather intriguing that a computational fact, the success or failure of standard algorithms, is so closely related to a combinatorial artifact. Finally, I went to Grant Schoenebeck's talk on proving lower bounds for the Lasserre hierarchy of semi-definite programming formulations of CSP problems, which is interesting because a number of different algorithmic techniques including the Arora-Rao-Vazirani formulation of Sparsest-Cut can be implemented in the lower levels of the hierarchy. Grant's result uses a connection to the width complexity of resolution, which I found very surprising, but then my understanding of this area is rather naive...

All interesting stuff, but I'd actually prefer FOCS to be even broader, and representative of all areas of theory. If that requires multiple sessions, so be it. Currently, FOCS seems to be an algorithms, complexity and combinatorics conference with papers from other areas filtering in unpredictably depending on the composition of the committee. It's very creditable that the standard of papers has remained consistently high (or even improved) over the years. But with several major sub-areas (semantics and verification, concurrency theory, database theory, theory of distributed computing, computational geometry, computational biology etc.) represented barely or not at all, it's quite hard to still make the case that FOCS and STOC are absolutely central to theoretical computer science as a whole. I guess the question is: how much do we value FOCS and STOC being core conferences?

Thursday, October 30, 2008

Election Special

A hodgepodge of election stuff, in my last post before November 4.

The markets and polls both suggest that the election is not that close. There's no question who will win in my (and Obama's) home state. The senate race here is even less interesting, with the Republicans not even bothering to mount a serious campaign against the incumbent and Majority Whip, Dick Durbin. At least the house race in my district is tight, with the incumbent, Mark Kirk, a moderate Republican, running against Dan Seals, an up-and-coming African-American. Sounds familiar.

Before voting, check out the answers Obama and McCain both gave to Science Debate 2008. Keep in mind that science will likely get a back seat given the tight budget constraints that we will have in the next several years. McCain's promise to freeze most federal spending will be particularly bad for science.

In the category of "Good Searching means Less Privacy" comes the Contributions Search Engine on the New York Times site. Type in a relatively unique name like "Fortnow" and you'll discover my $255 donation to the Obama campaign. In full disclosure, I also donated $100 to Obama in the primaries.

So why $255? Did I really mean to donate the maximum I could in one unsigned byte? Actually $255 = $1000/4 + $5, with the $5 coming from one of my daughters who wanted to add her own contribution. I still live in a base-ten world.

Wednesday, October 29, 2008

Math NAMES

(A quick note: Howard Karloff is organizing an excursion to the South Pacific during SODA. Sign up by November 3.)

And now for today's post:

What do the names Pascal, Hilbert and Mittag-Leffler have in common, aside from all being names of math folks?

I have AN answer in mind. You may very well give me other ones. I'll be curious if anyone gives me the one I had in mind.

Tuesday, October 28, 2008

The Gossip Center

Rahul Santhanam continues to spread the word from FOCS

Conferences are not really about the talks, they're about socializing and gossip. Who's having an affair with whom? Who's been drinking too much? It's for answers to questions like these that we brave jet lag, find others to teach our classes, and endure conference food…

Well, not quite. Poets and artists might gossip about such things, but scientists are on a higher plane, of course; we're beings of pure reason, are we not? The questions that concern us tend to be more of the form "Who's deserted University X for University Y?" and "Who's been spending weeks holed up in his room thinking about Conjecture Z?". And while I'm sure there are those of you who want to know more about the various research addictions triggered off by Razborov's proof of Bazzi's theorem, job news is probably of more general interest.

So here's what I've learned in the past week:

Emanuele Viola → Northeastern
Andrej Bogdanov → Chinese University of Hong Kong
Nisheeth Vishnoi → LRI, Paris
Neeraj Kayal → Microsoft Research, Bangalore
Brent Waters → University of Texas, Austin
Shang-Hua Teng → University of Southern California, Los Angeles
[Lance's Note: Rahul Santhanam → Edinburgh]

I rely on commenters to make good the omissions.

It certainly seems as if there's been more movement than usual on the job scene since STOC. The grad students and postdocs I've talked to seem pretty apprehensive about the market for next year, and with fair reason, I think. The market hasn't been that great for theory in the past couple of years, in any case, and the financial crisis seems likely to lead to funding cuts and more hiring freezes.

Perhaps there is some cause for optimism in the increasing number of postdocs available? The improvement in the NSF situation in the past couple of years means that more faculty in North America are able to hire postdocs. The emergence of Microsoft Research, Cambridge and the new Center for Computational Intractability in Princeton certainly won't hurt. But while having more postdocs around is good for our field, it might not be such a good thing from the point of view of the postdocs themselves. First, there is the intrinsic transience of the position - I had very good research environments in my postdocs at Simon Fraser and Toronto, but I never escaped the feeling of being in Purgatory. Second, there's the fact that job applications and interviews are very time consuming - it's hard to be productive when you know your entire future career might depend on how well you can advertise your research. And do we really want theoretical computer science to become like theoretical physics, where it's normal for graduating students to expect a postdoc apprenticeship of 6-7 years before they can find a permanent job?

For theorists not intent on getting a job in North America, the situation might be a little better. There are increasing opportunities in Europe and especially in Asia, as recent job news indicates. In general, the best attitude might be to be realistic about your prospects and to use the competitiveness of the market as motivation for your research.

Monday, October 27, 2008

Post-Lunch Highlights, Day 1

More from Rahul at FOCS

I realize I could quite easily spend more time writing about the talks than going to them, so I'll keep it short.

  • Gil Kalai gave a vastly entertaining talk on "Elections can be Manipulated Often" (with Friedgut and Nisan). Clearly a topic of current relevance, but Gil bravely resisted making an Obama-McCain reference. The main result of the paper essentially says that the cumulative manipulation power of the voters (their capacity to bring about their preferred global outcome by voting in a way that differs from their actual ranking of the candidates) in any "reasonable" election (where the voting scheme is far from yielding a dictatorship, and is insensitive to the names of the candidates) is invariably high, in a setting where voter preferences are modelled by random choices. The main open question from the talk was to prove that the current financial crisis was unavoidable.
  • Mihai Patrascu gave a talk on his paper "Succincter", which won the Machtey best student paper award. His paper greatly advances the state of the art in the field of succinct data structures. For instance, the paper shows how to store N "trits" (numbers that are either 0, 1 or 2) in N log₂3 + O(1) bits of memory with only logarithmic query time. The proof uses a clever recursive encoding scheme. (For contrast, a toy block-packing sketch appears right after this list.)
  • Dana Moshkovitz spoke about her Best Paper-winning work with Ran Raz on constructing sub-constant error 2-query PCPs of nearly linear size. This has been a major open problem for quite a while now, and the resolution of this open problem yields improvements to a large family of hardness of approximation results. The proof builds on earlier work by Moshkovitz and Raz constructing sub-constant error low degree tests of nearly linear size.
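To see what the Succincter result is up against, here is a toy Python sketch of the obvious baseline for the trit example (my own illustration, emphatically not the scheme from the paper): pack B trits per block, paying roughly one extra bit per block, i.e. about N/B bits of redundancy, with cheap queries. The point of the result is that the redundancy can be driven down to O(1) bits total while queries stay fast.

```python
import math

LOG2_3 = math.log2(3)

class PackedTrits:
    """Naive block packing (NOT the scheme from "Succincter"): B trits per block,
    each block encoded as an integer in [0, 3^B) stored in ceil(B*log2(3)) bits,
    so total space is about N*log2(3) + N/B bits."""

    def __init__(self, trits, block=10):
        self.block = block
        self.bits_per_block = math.ceil(block * LOG2_3)   # e.g. 16 bits per 10 trits
        self.n = len(trits)
        self.words = []
        for i in range(0, self.n, block):
            val = 0
            for t in reversed(trits[i:i + block]):
                val = val * 3 + t                          # base-3 encoding of the block
            self.words.append(val)

    def get(self, i):
        """Read trit i: fetch its block, then extract the right base-3 digit."""
        val = self.words[i // self.block]
        return (val // 3 ** (i % self.block)) % 3
```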
There's been precious little controversy at this conference, so I feel duty-bound to stir something up. There's an asymmetry between the two seminar rooms – one is considerably larger, wider and has better acoustics than the other. Is it a coincidence that the more complexity-oriented talks in Session B are in the smaller room, and the algorithms-oriented Session A talks in the larger one? I think not. This is merely the latest instance of the step-sisterly treatment accorded to complexity theory down the ages, right from when Euclid designed his algorithm but failed to analyze its complexity. I urge all right-minded readers to express their outrage at this state of affairs.

More seriously, I can't think of a hitch; even lunch yesterday was good. Kudos to Sanjeev Khanna, Sudipto Guha, Sampath Kannan and the student volunteers.

Business Meeting

Another FOCS report from Rahul

The business meeting was compered by Paul Beame.

  1. The local chair Sanjeev Khanna spoke. 270 registered this year, as opposed to 250 last year and 220-230 the year before that. Excelsior!
  2. The P.C. report was given by the P.C. chair R. Ravi. Some salient features:
    1. 349: not the number of reviewed papers, but rather the number of external reviewers
    2. June 13, 2008: the Night of the Long Knives, when as many as 150 papers had their hopes quashed.
    3. "Succincter": In an unprecedented double, Mihai's Best Paper Title awardee also wins the Machtey Best Student Paper award
    4. Dana Moshkovitz and Ran Raz win best paper for "Two-query PCP with Sub-Constant Error"
  3. Milena Mihail gave a presentation on FOCS 2009, which will be held in Atlanta from November 8-10 next year.
  4. Volunteers were requested for hosting FOCS 2010. Three hands shot up instantly, and after a long and contentious debate culminating in a beer-bottle fight... Nah.
  5. We all got to vote nine days before the big day. David Shmoys was elected vice-chair of the IEEE Technical Committee on Mathematical Foundations of Computing.
  6. A couple of major Prize announcements were made; no suspense involved for readers of this blog. Volker Strassen is the recipient of the Knuth Prize, and Spielman and Teng have won the Godel Prize for their paper on smoothed analysis of linear programming.
  7. NSF Report by Sampath Kannan, who's a Division Director in the CCF (Computing and Communications Foundations) program. We theorists seem to have a nice program of our own now directly under CCF called Algorithmic Foundations, which covers most of the traditional STOC/FOCS areas, and has a budget over $30 million for this year. Grant deadlines coming up pretty soon actually: for the Large grants on October 31, for the Medium grants shortly afterward in November, and for the Small ones in December. There was also some information on relevant cross-cutting funding opportunities.
  8. STOC 2009 will be held May 31- June 2 in Bethesda, Maryland. The submission server is already active. Title and short abstract are due November 10, extended abstract is due November 17.
  9. Pierre McKenzie recapitulated CCC 2008 in 30 seconds, and then announced that CCC 2009 would be held in Paris. Paris, France, as a matter of fact; excited murmurs from the audience. For a brief moment there complexity theory was cool.
  10. David Johnson announced that SODA 2009 would be held January 4 - January 6 in New York, and that Volker Strassen would be giving his Knuth Prize lecture there.
  11. Miscellaneous announcements, including a long-overdue archiving of old FOCS proceedings in IEEE Xplore ( I believe the only ones left are '60,'61, '63 and '67) and news about the establishment of the Center for Computational Intractability in Princeton, funded by the NSF/CISE Expeditions grant announced earlier on this blog.
That was one of the duller business meetings in recent memory. Which is a good thing, of course – it means we're all happy and getting along.

Sunday, October 26, 2008

Pre-Lunch Session

More from Rahul

A typical FOCS day divides up the same way as a day in a cricket test match: pre-lunch session, afternoon session and post-tea session. And approximately the same fraction of the audience is awake and/or paying attention?

I set myself the goal of attending the first talk of the day, which naturally meant that I stumbled in around halfway through Manindra Agrawal's talk on "Arithmetic Circuits: A Chasm at Depth Four" (Agrawal and Vinay). This is one of my favorite papers in the conference: the result is surprising, easy to state and not hard to prove. The authors show that an exponential lower bound on arithmetic circuit size for depth-4 circuits actually implies an exponential lower bound on circuit size for general arithmetic circuits. There are precedents for this kind of depth reduction for arithmetic circuits (unlike for Boolean circuits), e.g., the Berkowitz-Skyum-Valiant-Rackoff result that polynomial size circuits can be simulated by polylogarithmic-depth circuits, but the nice thing about this result is that it gives an explanation for why most recent progress on arithmetic circuit lower bounds has been in the very low depth regime. Manindra's talk was marred a little by technical glitches, but he has an unparalleled ability to stay cool in a crisis.

Then came an excellent talk by Madhur Tulsiani on "Dense Subsets of Pseudorandom Sets" (Reingold, Trevisan, Tulsiani and Vadhan). This is a topic that Luca has said a lot about in his blog, and I certainly can't hope to better that. The final talk in the session was by Timothy Chow on "Almost-natural Proofs", about how the natural proofs framework of Razborov and Rudich ceases to be a barrier to proving circuit lower bounds if it is relaxed slightly. The talk was good, but I was even more intrigued by the speaker. He is unusually wide-ranging in his choice of problems to work on - he comments quite often on the FOM mailing list, and I know he's worked among other things on logics for polynomial time. It's good for our heavily algorithms-and-complexity centered community to have people from outside the mainstream working on our problems.

You'll notice that most of the talks discussed so far are complexity-related - old habits die hard. But I did try compensating by going to Mihai Patrascu's talk on "Dynamic Connectivity: Connecting to Networks and Geometry" (Chan, Patrascu and Roditty), about data structures for dynamic connectivity in graphs where both vertex and edge updates are allowed. There did seem to be some elegant ideas involved, but the talk was geared for an audience which knew something already about dynamic data structures, and I didn't follow the details. This reminds me of why I don't tend to go to talks too far afield of areas I work in - presenters do tend to take certain things for granted in their audience, and it makes sense for them to do so, since most members of the audience probably have a reasonable background in the subject of the talk. Also, it wasn't the most enthusiastic of talks, but I am looking forward to Mihai's talks on "Succincter" and "(Data) Structures", both of which concern very natural questions that are interesting even to non-specialists in data structures.

Next for me was another algorithmic talk: Chris Umans on "Fast Modular Composition in any Characteristic" (Kedlaya and Umans), which showed that the composition of two polynomials f and g of degree n modulo a third polynomial of the same degree can be computed very efficiently, in time that's nearly linear in n. The proof goes through an earlier reduction by Chris of the problem to simultaneous evaluation of multilinear polynomials on several points; the new contribution is a clever solution to this simultaneous evaluation problem through recursively applying Chinese remaindering. The talk, as is usual with Chris, was crisp and well-organized.
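For non-specialists, here is a small Python sketch of what modular composition asks for (my own illustration of the problem, not the Kedlaya-Umans algorithm): given f, g, h of degree about n over a prime field, compute f(g(x)) mod h(x). The Horner-style baseline below does about n polynomial multiplications modulo h, i.e. roughly n³ word operations with schoolbook multiplication; the new result does the whole job in nearly linear time.

```python
def _polymulmod(a, b, h, p):
    """Multiply polynomials a and b (coefficient lists, lowest degree first),
    then reduce modulo the monic polynomial h; coefficient arithmetic mod p."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % p
    d = len(h) - 1                      # h is monic of degree d
    for i in range(len(prod) - 1, d - 1, -1):
        c = prod[i]
        if c:                           # cancel the leading term using h
            for j in range(d + 1):
                prod[i - d + j] = (prod[i - d + j] - c * h[j]) % p
    return [c % p for c in prod[:d]] + [0] * (d - len(prod))

def modular_composition(f, g, h, p):
    """Compute f(g(x)) mod h(x) by Horner's rule: deg(f) multiplications mod h,
    so cubically many word operations naively -- the barrier Kedlaya-Umans break."""
    d = len(h) - 1
    result = [0] * d
    for coeff in reversed(f):
        result = _polymulmod(result, g, h, p)
        result[0] = (result[0] + coeff) % p
    return result
```

For example, modular_composition([1, 1], [0, 1], [1, 0, 1], 5), that is f = 1+x, g = x, h = 1+x² over GF(5) with coefficients listed lowest degree first, returns [1, 1], i.e. 1+x.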

That concluded my pre-lunch session – I wouldn't have been alert enough to do justice to the remaining talks. Stay tuned for a report on the rest of the day's play.

Sampling FOCS

Rahul Santhanam reports from Philadelphia.

I'm in Philly for FOCS, and will be reporting on the extravaganza for this blog, including the drama and suspense of the talks ("Will there be heckling?" "Will he make an Obama joke?" "Will she break the 20-minute barrier?") and the drunken revelry of the business meeting. Typically, I have a few simple rules for attending talks:

  1. No conflicts with my regular 4 am - noon bedtime.
  2. Attending a double-digit number of talks is just as good as attending no talks at all.
  3. The title better have the word "complexity" in it.
But with my reporting duties as an excuse, I've resolved to discard my old inertial lifestyle and tax my brain with the new and unfamiliar. Facility location, electronic commerce, quantum entangled games – bring it on!

You could help me with this. If you're a theory junkie who couldn't make it here and there's some talk you really wanted to hear, let me know and I might be able to give you some idea – the barest idea, or at the very least a fictional idea, of what it was like to be there…

Saturday, October 25, 2008

Knuth Prize to Strassen

The ACM announced that Volker Strassen will receive the Knuth Prize for his algorithmic work, most notably on primality, matrix multiplication and integer multiplication. The Knuth Prize is given for major contributions over an extended period of time. The winner gives a Knuth Prize lecture and Strassen will give his at SODA.

Strassen's matrix multiplication algorithm is a simple but amazing recursion that beats the easy cubic bound. And the probabilistic primality algorithm with Solovay drove complexity research into probabilistic computation and helped make modern cryptography possible.
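For readers who haven't seen the recursion, here is a minimal sketch (textbook Strassen, my own transcription in Python/NumPy, with sizes assumed to be powers of two for simplicity): seven recursive multiplications of half-size blocks instead of eight, giving the O(n^{log₂7}) ≈ O(n^{2.81}) bound.

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursion: 7 half-size multiplications instead of 8.
    Assumes square matrices whose dimension is a power of two."""
    n = A.shape[0]
    if n <= 64:                        # below this, plain multiplication wins
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack([np.hstack([C11, C12]), np.hstack([C21, C22])])
```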

Friday, October 24, 2008

Who to vote for?

I have not blogged much on the election because I doubt I can say anything you haven't already heard. I have gathered here reasons for/against the candidates that you might not have heard. They are NOT mine- they are things I've heard or read. If 1/2 of these comments are new to 1/2 of my readers, I'll be happy.

REASONS TO VOTE FOR OBAMA OR AGAINST MCCAIN
  1. I've always wanted a Prez younger than me!
  2. Joe Biden is one of the few non-millionaires in the Senate. Hence he understands the working man.
  3. If Obama wins then Hillary probably won't be able to run for 8 years. (If McCain wins she can run in 4 years.)
  4. In the first debate Obama showed that he was willing to agree with McCain on some things. McCain did not.
  5. McCain does not know how to use email.
  6. I have two bets that Obama will win. (One is with an African-American who thinks the country will never elect an African-American Prez. She hopes she is wrong and will gladly lose the bet.)
  7. The McCain-Palin campaign spent $150,000 on clothes for Sarah Palin. This shows she has no understanding of economics.
REASONS TO VOTE FOR MCCAIN OR AGAINST OBAMA
  1. I've always wanted a Prez older than my grandparents!
  2. Joe Biden is one of the few non-millionaires in the Senate and hence has no understanding of Economics.
  3. (Someone at my church told me this.) I live in the same Condo as John McCain. He voted for me for President of the Condo Association, so I feel I owe it to him to vote for him for President of America.
  4. Sarah Palin is a hockey mom with 5 kids and a job. Hence she understands the working mom.
  5. Obama is a Muslim who has been attending Reverend Wright's Christian Church for 20 years. (NOTE- 10% of Americans think Obama is Muslim. 1% think he is Jewish.)
  6. An intelligent man who I trust agrees with McCain on many things. That man: Barack Obama (First Debate and a few other things).
  7. The Republicans created the mess we're in now, make them clean it up.
REASONS TO VOTE AGAINST BOTH OF THEM
  1. I'm from a non-swing state so my vote does not matter.
  2. The Daily Show, The Colbert Report, Saturday Night Live, and The Capitol Steps won't have much to work with. Obama, McCain, and Biden just aren't that funny. Palin makes political satire redundant. ("Alaska is next to Russia and therefore she has Foreign Policy Experience." Is that John McCain? Jon Stewart? John McCain on Jon Stewart's show?)
  3. Who do the Assoc of American Political Satirists endorse? Having read this I still can't tell.

Thursday, October 23, 2008

Academic Dominance

Maureen Dowd's column yesterday mentioned that Obama was being accused of reading a book about the end of America written by a "fellow" Muslim. Turns out I read the same book, Fareed Zakaria's The Post-American World.

Not so much about the end of America, but the gradual ending of America's economic dominance in the world. Zakaria contrasts America today with the rise and fall of the British empire. The book focuses on China and India, their recent rise and the challenges each faces, as well as suggestions for America to keep its competitive edge.

At the University level, Zakaria seems quite bullish on America especially in CS.

The situation in the sciences is particularly striking. A list of where the world's 1000 best computer scientists were educated shows that the top ten schools are all American. US spending on R&D remains higher than Europe's, and its collaborations between business and education institutions are unmatched anywhere in the world. America remains by far the most attractive destination for students, taking 30 percent of the total number of foreign students globally. All these advantages will not be erased easily, because the structure of European and Japanese universities—mostly state-run bureaucracies—is unlikely to change. And while China and India are opening new institutions, it is not that easy to create a world-class university out of whole cloth in a few decades. Here's a statistic about engineers that you might not have heard. In India, universities graduate between 35 and 50 Ph.D.'s in computer science each year; in America, the figure is 1,000.
We have seen some countries like Israel create world-class universities out of whole cloth in a few decades. The US did it itself in the late 19th century. So China and India could have dramatic success at the university level if they make the commitment to resources and change. Until recently the Indians and Chinese who came in large numbers to our graduate programs simply stayed in the US. Now, for many reasons, not the least of which is the improving economic conditions in both countries, we are seeing more researchers heading back to their native countries, whether it be Turing Award winner Andy Yao or the large number of Indian scientists who are moving or plan to move back to India. Imagine the changes we've seen at Israeli universities happening in a country with a 10-digit population.

Wednesday, October 22, 2008

Weird Sum/ith largest/Wallet-- revisited

  1. In Richard Matthew McCutchen's Guest post he challenged the readers to figure out a certain sum. And I challenged them to help me do a better pi in html.
    1. KKMD (comment 3) is correct, it is the largest prime that divides n. The formal proof is here. (A quick numerical sanity check appears after this list.)
    2. pi: I originally used an & followed by pi which yields &pi
    3. pi: I was supposed to use & followed by pi and then a semicolon which yields π
    4. pi: Lance says to use < span style=" font-family:times" > & pi;</span> which yields π
  2. Some of the comments from the posting on ith largest of n inspire some random thoughts from me:
    1. Why on earth would anyone be doing a computer search for such algorithms? (for algorithms to find ith largest of n with as few comparisons as possible). One hope is that with enough empirical evidence we may get EXACT values for how many comparisons it takes. Also, for the challenge! But YES, limited practical value. But see next point.
    2. Why do you think your conjecture is true? The known algorithms for finding the ith largest of n take n+(i-1)log n + O(1) comparisons and begin by making pairwise comparisons. For i small this is optimal up to the O(1). So in this realm it seems likely. But what is `i small'? When finding the 10th largest out of 40, is that more like `i small' or more like finding the n/4th largest element? Don't know- want to find out. Another reason to do this- when is small small?
    3. A commenter says there is interesting info in a Tech Report that may be hard to find. The notion of a Tech Report that is hard to find may be unfamiliar to young people. With the web it may be easier to find some unpublished papers than some published ones, depending on who the authors are and what the journals are. (If someone knows where the journal version of Barrington's paper on Bounded Width BPs containing NC1 is online please let me know. Barrington does not have it on his website or know where it is online.)
  3. Lance recently recommended a certain wallet in this blog. On his advice I bought it and it's great. What I wonder is, how big a mover and shaker is Lance? Should they have given him a free wallet since he influences others? How many others have bought it based on his recommendation? At Maryland Theory Day there was a talk about how sellers should give people of influence discounts since they will influence others to buy their product. This is not a new idea, but with modern technology it can be better targeted.
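Returning to item 1, here is a quick numerical sanity check of the largest-prime answer (my own sketch). Evaluating the limit with floating point is hopeless since k!^n is astronomical, so the code uses the divisibility condition that the cos^{2m} factor encodes: the term tends to 0 exactly when n divides k!^n, and to 1 otherwise.

```python
from math import factorial

def p_of(n, K=200):
    """The limit of the sum: count the k >= 0 with n not dividing k!^n
    (the 1 - cos^{2m} term tends to 1 for exactly those k)."""
    return sum(1 for k in range(K) if pow(factorial(k), n, n) != 0)

def largest_prime_factor(n):
    p, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            p, n = d, n // d
        d += 1
    return max(p, n) if n > 1 else p

for n in range(2, 30):
    assert p_of(n) == largest_prime_factor(n)
print("p(n) = largest prime factor of n, for n = 2..29")
```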

Tuesday, October 21, 2008

World Series at FOCS

The timing of the FOCS conference in late October often overlaps with the World Series, and on rare occasions they happen in the same city at the same time. Checking back through the 48 previous FOCS conferences, it has happened only once before: in 1997, FOCS was in Miami Beach October 19-22, and Game 2 of the World Series was played October 19th, with the Florida Marlins losing to Cleveland at Pro Player Stadium in Miami.

A couple of other close calls. In 1992, I had tickets to a potential series in Pittsburgh but I've told that sad story before. In 1999, FOCS was held in New York October 17-19 and the Yankees hosted Atlanta a week later in the series. Last year FOCS was held in Providence October 20-23 and just a short drive up I-95 the Red Sox beat the Rockies twice October 24-25.

This year we have a perfect overlap. The 2008 FOCS Conference runs October 25-28 in downtown Philadelphia and Games 3, 4 and 5 (if necessary) of the Series between the Phillies and the Tampa Bay Rays will be held October 25-27 three miles south in Citizens Bank Park.

But I'll miss the fun and won't make FOCS this year. I'd love to hear from any FOCS attendee that manages to snag World Series tickets and gets to enjoy the two great fall classics.

Monday, October 20, 2008

An interesting Summation

(Guest post by Richard Matthew McCutchen.)

The formula below appears in The TeXbook as a typesetting exercise (I have slightly modified it). The expression has a clever mathematical meaning that is not discussed in the book. Try to find it and prove it!

Let p(n) be the limit as m → ∞, m ∈ ℤ, of

Σ_{k=0}^{∞} (1 − cos^{2m}(k!^n π/n))

(Comment from Bill: π is supposed to be pi, the ratio of circumference to diameter of a circle. HTML doesn't really do a good pi- if someone knows how, in HTML, to do a better one- let me know.)

Friday, October 17, 2008

Finding the ith largest of n numbers for i,n SMALL

Exactly how many comparisons does it take to find the ith largest of n numbers? For i=1 and i=2 these numbers are known exactly. Beyond that I'm not sure, though I do know that we don't know (say) how many comparisons to find the 15th largest element out of 30. (There is a table in KNUTH, but I could not find a website of these things. If someone knows one, please comment.)

The following conjecture, if true, would help speed up computer searches for such algorithms and may lead to interesting math in and of itself. Let V(i,n) be the min number of comparisons (worst case) to find the ith largest of n numbers.
Conjecture: There is an algorithm for finding the ith largest of n numbers that uses V(i,n) comparisons and begins by comparing x_1 to x_2, x_3 to x_4, x_5 to x_6, ..., x_{n-1} to x_n (if n is even; else it ends at x_{n-2} to x_{n-1}).
A weaker conjecture, where we allow V(i,n)+1 or V(i,n) plus some constant comparisons, may be more tractable.
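As a data point for the conjecture, here is a sketch in Python (standard textbook material, my own code) of the classical algorithm showing V(2,n) = n + ⌈lg n⌉ − 2: find the max with a knockout tournament, whose first round is exactly the pairwise comparisons in the conjecture, then take the max of the elements that lost directly to the winner.

```python
def second_largest(xs):
    """Tournament method for the second largest element.
    Uses n + ceil(lg n) - 2 comparisons (exactly, when n is a power of two)."""
    assert len(xs) >= 2
    comparisons = 0
    # each entry: (value, list of values that lost directly to it)
    players = [(x, []) for x in xs]
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):   # first round: x1:x2, x3:x4, ...
            (a, la), (b, lb) = players[i], players[i + 1]
            comparisons += 1
            if a >= b:
                nxt.append((a, la + [b]))
            else:
                nxt.append((b, lb + [a]))
        if len(players) % 2 == 1:                 # odd player out gets a bye
            nxt.append(players[-1])
        players = nxt
    champ, losers = players[0]
    # the second largest lost directly to the champion at some round
    second = losers[0]
    for v in losers[1:]:
        comparisons += 1
        if v > second:
            second = v
    return second, comparisons

print(second_largest([5, 1, 9, 3, 7, 2, 8, 4]))   # (8, 9) for n = 8: 8 + 3 - 2 = 9
```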

Thursday, October 16, 2008

How Much is a Proof?

Back in our grad school days, Noam Nisan told me he preferred incomplete proof write-ups in papers. By being forced to fill in the missing steps, he could really understand what was going on in the proof.

I don't agree with Noam. I shouldn't have to struggle to figure out how the author went from point A to point B. I've spent far too many hours trying to understand the logical jump when the author says "Clearly" or "Obviously". On the other hand I don't want the authors to spell out every small algebraic manipulation either. And it's just completely infeasible to give a fully formal logical proof of even the simplest theorems.

So what level of proof should one give? As a general rule you should write for the reader. What would make it easier for him or her to understand your paper? When you leave out some details, are you doing that because they would clutter your proof or because you are trying to save yourself some time? If it is the latter you do no one any favors.

Don't make assumptions about your readers. If you use a supposedly "well-known" technique, then either spell it out or make it clear what you are doing, with a reference for those of us unfamiliar with such techniques.

And how many times have you read "the full details will appear in the final version" when there are no later versions? Put those details in now. If you hit a proceedings page limit, have a full paper on the web with a footnote in the conference version pointing there.

If you do have a technically messy proof, the technicalities often overshadow the very pretty new ideas you needed for the proof. Be sure to also give a proof sketch that brings those ideas to light.

But mostly the Golden rule applies—Write your proofs as you would like others to write proofs for you.

Wednesday, October 15, 2008

Theory Day at University of Maryland

University of Maryland had its 20th THEORY DAY. It was organized by Samir Khuller and Azarakhsh Malekian.
  1. There was a theme! All of the talks were on Game Theory/Social Networks/Auctions/Internet stuff. Most theory days do not have a theme, but this one was organized in a different way- Azar knew that some of these people would be in town for INFORMS.
  2. Having a theme was GOOD. Usually at theory day (anywhere) I have to get my head into a certain mode of thinking and then it changes with every talk. Here my head was in the same mode all day.
  3. One odd thing- I got 4 `short introductions to Game Theory'.
  4. Jason Hartline and Nicole Immorlica gave talks. I had never met them before so I got to see if they looked like their images in this famous poster. They looked more like themselves than Lance did, but not much. Then Jason turned his back and bent it a bit and THEN he looked like the poster.
  5. Here are the talks:
    1. Mohammad Mahdian. Yahoo! Research. Externalities in Online Advertising.
    2. Nicole Immorlica. Northwestern University. The Role of Compatibility in Technology Diffusion on Social Networks.
    3. Konstantinos Daskalakis. Microsoft Research. Computing Equilibria in Large Games.
    4. Jason Hartline. Northwestern University.
    5. Vahab Mirrokni. Google. Submodular Optimization: Maximization, Learning, and Applications.
    6. Amin Saberi. Stanford University. Game Dynamics, Equilibrium Selection and Network Structure.
  6. They all gave excellent talks. Why was that? Well, for one thing, they were excellent speakers. But also this is a relatively young field so the work is still close to the motivation (there are fields of math where the motivation has been forgotten over time!). And the motivation is itself easy to explain.
  7. The conference was intentionally the same time as INFORMS so that people who were going there anyway could give talks and/or attend our Theory Day.
  8. Kudos to Samir and Azar for organizing it!

Tuesday, October 14, 2008

The Fiscal Crisis

Sitting in our Ivory Towers, our day-to-day lives don't change much even when dramatic events happen in the "real world." Even with the current economic crisis my basic job of teaching and research goes along much as it has before. Whether P = NP does not depend on the GDP (though the converse might be false).

But in the end it does all come down to money. University endowments have shrunk. Alumni donations will surely drop. Lower tax revenues will hurt government support of public universities. Industrial research labs may also take a hit if their corporate parents have financial difficulties.

Schools like Northwestern have structured their finances so they can weather short term shocks in the economy but a prolonged recession or depression will start to take its toll.

New faculty hiring will likely be the first victim in universities. Tenured professors may delay their retirement after taking a hit in their CREF accounts, holding up slots for new hires. Universities worried about their long-term finances may also slow down opening up new positions.

On the other hand we should see increased enrollments on every level which may push the need to hire more CS faculty. Computer Science has lost its lustre in recent years, but still reliably produces jobs and there is a flight to safe majors in times of economic turmoil.

People who have trouble finding a good job often go back and get their Master's while they wait out the recession. We should also get more people interested in Ph.D.s as alternatives dry up. My sources told me we have seen a major drop in Indian IIT graduates going on to Ph.D.s because banks have offered them high salaried positions. With the banking industry taking the biggest hit, we welcome those IITers back to academics.

Finally in every crisis, we look for someone to blame, and some blame the algorithms.

Somehow the genius quants – the best and brightest geeks Wall Street firms could buy – fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and – poof! – created $62 trillion in imaginary wealth. It's not much of a stretch to imagine that all of that imaginary wealth is locked up somewhere inside the computers, and that we humans, led by the silverback males of the financial world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith for answers. Or maybe we are lost in space, with Dave the astronaut pleading, "Open the bank vault doors, Hal."

Monday, October 13, 2008

Thanks for better upper bound. Still want better lower bound- On Comm Comp of MAX

I posted on the max problem a while back.
Alice has x, an n-bit integer. Bob has y, an n-bit integer. They both want to know max(x,y). This can be done with a deterministic protocol using n + √(2n) + O(1) bits. Can they do better?
Anon Comment 5 on that post, and possibly a paper that was pointed to, led to a deterministic protocol that takes n+O(log n) bits. I am presenting it carefully in this post, hoping that someone will NOW prove the lower bound OR get an even better upper bound.

When I had the upper bound of n + √(2n) + O(1) I thought it was optimal. Now that I have the upper bound of n + O(log n) I'm not so sure. This is a good example of why lower bounds are so interesting- someone clever may come along with a better algorithm. How do you prove that they can't?
  1. Alice has x, Bob has y. L is a parameter to be determined later. We will assume L divides n. Let m=n/L. Alice views x as x_1...x_m where each x_i has length L. Bob views y as y_1...y_m where each y_i has length L.
  2. Alice sends to Bob a string A of length L that is NOT any of x_1,...,x_m. Bob sends Alice a string B of length L that is NOT any of y_1,...,y_m. Note: we will need 2^L > n/L so that such strings exist.
  3. As long as it looks like the strings are identical they alternate sending the next block: Alice sends x_1, Bob sends y_2, Alice sends x_3, Bob sends y_4, ... (NOTE- nobody has to send a bit saying `we are tied'.) They keep doing this until one of them notices that the strings are NOT equal. If they never discover a difference then they have spent n+2L bits and discovered x=y. They both know the max since x=y.
  4. If Bob notices that x_i > y_i then he sends B followed by a 0. Alice knows that B can't be one of Bob's blocks, so she knows why Bob sent this. She then sends the rest of her string that she has not already sent. They both know max(x,y). This takes roughly n+3L bits.
  5. If Bob notices that x_i < y_i then he sends B followed by a 1. Alice knows that B can't be one of Bob's blocks, so she knows why Bob sent this. Bob then sends the rest of his string. They both know max(x,y). This takes roughly n+4L bits. (If Alice is the one who notices a difference, the protocol is symmetric, with A in place of B.)
What can L be? We needed 2^L > n/L. L = lg n works. So this works with n + 4 lg n + O(1) bits. You can do slightly better by taking L = lg n - lg lg n.
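Here is a small Python simulation of the protocol above (my own reconstruction to check the bit counts, not code from the paper pointed to in the comments). It assumes L divides n and 2^L > n/L, e.g. L ≈ lg n, and returns the answer along with the number of bits exchanged.

```python
def all_strings(L):
    """All binary strings of length L."""
    for v in range(2 ** L):
        yield format(v, "0{}b".format(L))

def max_protocol(x, y, L):
    """Simulate the n + O(L) protocol on n-bit strings x and y.
    Assumes L divides n and 2**L > n/L. Returns (max(x, y), bits exchanged)."""
    n = len(x)
    m = n // L
    assert n % L == 0 and 2 ** L > m
    xb = [x[i:i + L] for i in range(0, n, L)]
    yb = [y[i:i + L] for i in range(0, n, L)]
    # Step 1: each player announces an L-bit "escape" string unequal to all of
    # its own blocks (one exists because 2**L > m).
    A = next(s for s in all_strings(L) if s not in xb)
    B = next(s for s in all_strings(L) if s not in yb)
    bits = 2 * L
    # Step 2: alternate blocks while they agree (Alice sends blocks 0,2,4,...,
    # Bob sends blocks 1,3,5,...); the receiver of a block checks for a mismatch.
    for i in range(m):
        bits += L                              # block i is sent
        if xb[i] != yb[i]:
            sender_is_alice = (i % 2 == 0)
            sender_has_max = ((x > y) == sender_is_alice)
            bits += L + 1                      # escape string plus one bit
            # the holder of the max sends the blocks the other side still lacks
            bits += (m - i - 1 if sender_has_max else m - i) * L
            return max(x, y), bits
    return x, bits                             # x == y

# Example: n = 16, L = 4, so the cost is at most n + 4L + 1 = 33 bits.
print(max_protocol("1010001111001010", "1010001011001111", L=4))
```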

Friday, October 10, 2008

Patents and Copyrights vs Web Traffic (guest Post by Amir Michail)

(Guest post by Amir Michail)

Software patents are intended to reward innovation, especially by startups with limited resources. But in today's Web 2.0 world of social sites such as Twitter, a service is more useful when it has more users. Even if the implementation contains something that is worthy of a patent, getting such a patent is not particularly important because the service will generally have an overwhelming advantage over its clones in terms of number of users -- new users will generally pick the service with the most users, thus resulting in much more web traffic growth for the original service.

Similarly, one could argue that copyright is unnecessary on the web -- those who copy are unlikely to get anywhere near as much web traffic as the original source. For example, someone may copy posts verbatim from a popular blog, but it is unlikely that he/she will get anywhere near as much traffic. In particular, people already linked to the original source, thus giving it higher PageRank.

Admittedly, there are occasions where things don't work out like this and where patents and/or copyright would have been helpful. Consequently, we could have a law that requires search engines to provide a link back to the original source if any for each search result. This would be done using a completely automated heuristic matching algorithm. In this way, those who copy are even less likely to get anywhere near as much traffic as the original source.

Thursday, October 09, 2008

A Victim of the Internet

I come from perhaps the last generation of those who mainly got their news from newspapers. I could spend a morning enjoying the Sunday paper and I still like to pore over the paper at breakfast. But the newspaper industry has been hit hard by the Internet, with much less advertising (particularly with classifieds going to places like Craigslist) and fewer and fewer people reading the physical paper, as young people get their news from the Internet and the myriad news (and comedy) channels on TV. Newspapers have had to shrink staff and reduce the quantity and quality of their product.

This crisis hit home last week with the redesign of the Chicago Tribune. I understand the need for a paper to freshen up its image and can look beyond the extra color and fluff. But it is what's missing that bothers me. Two sections (Business and Local News) have been merged into the first section, with business news more focused on personal finance than business. The Sunday Perspective (The Tribune's version of the Week in Review) has become a 3-page editorial/op-ed. But most dramatic is a significant drop in the coverage of real national and international news.

The Tribune hit its nadir last Saturday. The day after the bailout passed the house and was signed by the president, the front page had only one story—on the dating habits of suburban teens. The bailout story was on page 16.

The Tribune picked an interesting time to have their redesign, a month before the elections and with both Chicago baseball teams about to start their (short-lived) march into the playoffs. Perhaps they felt few would unsubscribe during this time. I went ahead and subscribed to the New York Times (instead of just skimming it online) because I still crave real news with my coffee. I'll get both papers for a while and if the Tribune doesn't recover I may have to say goodbye to my old friend.

Wednesday, October 08, 2008

Contradictions in Math-bad, in Political Pundits- Standard

(This is not a partisan post. I am using a Republican example since it illustrates the point so well.)

In most (all?) serious fields of academia it is not acceptable to directly contradict yourself. Not so in the field of political punditry. Karl Rove (and others) recently did this, as the following clip illustrates. My question is, why do they get away with it? Some thoughts.
  1. I only saw this on The Daily Show. No other media seems to have picked up on it. (Some blogs did.) Karl Rove will not be challenged on this. FOX NEWS will not call him into their office and ask him how he could say contrary things.
  2. Rove is only talking to fellow conservatives. It's like a Cold Fusion workshop or a Parapsychology conference or an Intelligent Design journal or a Bush rally- your audience is pre-picked.
  3. People think (perhaps correctly) that all pundits do it, so nobody is particularly criticized if they do it. (I've seen this reasoning used to defend negative ads.)
  4. If Karl Rove was challenged he might say The Daily Show ran that piece because they are part of the liberal media elite.
  5. Most people have not had a course in basic logic to tell them when two statements are clearly contradictory. (Though common sense should suffice.)
  6. There is an end justifies the means mentality. If Karl Rove helps get McCain elected, then that is all that matters.
  7. Is there any serious field of academia where people can make contrary statements and not be called on it?

Tuesday, October 07, 2008

Expectation of Expectations

Can we quantitatively judge the effect of a debate on the election? One reasonable approach would look at prediction markets. If the value of the security for Obama to win were to greatly increase after the debate in the absence of significant other factors, one could conclude that the debate went well for Obama, or at least that he did better than expected.

Intrade, the Irish real-money prediction markets site, went one further and set up a new security 2ND.DEBATE.OBAMA based on the outcome of tonight's presidential debate. In short,

This contract will be settled according to which Presidential Candidate receives the largest increase in value following the Presidential Debate on Tuesday October 7th. The pre-debate value will be calculated for 2008.PRES.OBAMA and 2008.PRES.McCAIN and then compared with the post-debate value for each of these contracts. This contract will expire at 100 if the value of 2008.PRES.OBAMA increases more than the value of 2008.PRES.McCAIN. This contract will expire at 0 if this is not the case.
A similar security for the VP debate paid off zero as there was a slight bump for McCain suggesting Palin did better than expected.

But there is a flaw in the logic for such a security—The market should already take into account the expectation of the debate. In general for any random variable A, the expectation of the expectation of A is the same as the expectation of A, i.e., E(E(A))=E(A).

So, assuming that these securities represent real probabilities, should the value of 2ND.DEBATE.OBAMA before the debate be 50? Not quite. Suppose that there is a 75% chance that the Obama stock goes up 5 and a 25% chance it drops 15. Then the price of 2ND.DEBATE.OBAMA should be 75%. The price of the security indicates who has the least chance of major downside from the debate. But to think one could use this security to predict how well the debate will go makes little sense.
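To spell out the arithmetic in that example (my own addition): the expected change in 2008.PRES.OBAMA is 0.75×(+5) + 0.25×(−15) = 3.75 − 3.75 = 0, so the pre-debate price already incorporates the debate, consistent with E(E(A))=E(A); yet 2ND.DEBATE.OBAMA trades at 75 rather than 50, because it prices only the probability of an upward move, not its size.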

The press release suggests using this security for real-time debate tracking. The price of 2ND.DEBATE.OBAMA should gyrate dramatically on small changes of 2008.PRES.OBAMA. But one could just derive this directly from 2008.PRES.OBAMA, and since the latter security has much greater liquidity it will likely give better information.

Chris Masse suggested another explanation.

[Intrade] is trying to induce traders into trading more (so as to get more transaction fees), and this new contract offers more upsides for the traders who will bet right. One could double one's initial investment in a matter of 2 days.
In which case all the power to them.

Monday, October 06, 2008

The Unexpected Hanging Paradox- any serious math there?

(20th Maryland TCS Day Tue, Oct 14, 2008 9:30am - 4pm)

The liar paradox is very similar to the serious math of Godel's Incompleteness theorem. Berry's Paradox is very similar to the serious math of the Kolmogorov function (map x to the length of the shortest description of x) being noncomputable.

Is there any serious math that is similar to the Unexpected Hanging Paradox? Here is the paradox:

A judge tells a condemned man that he will hang on either Monday, Tuesday, Wednesday, Thursday, or Friday at 1:00PM, and that he will be surprised when it happens. His lawyer tells him not to worry: the hanging can't happen on Friday, since if he is alive Thursday at 1:01PM then he knows it will be on Friday and hence won't be surprised. But then he also can't be hung on Thursday: since Friday is out, if he is alive Wednesday at 1:01PM he will KNOW it must be Thursday, and hence won't be surprised. Proceeding like this he reasons that he can't be hung at all. On Thursday the judge comes with the noose- and the prisoner is surprised.

I'm not asking for a resolution-- it seems to be a significant problem for philosophy-- however, I am asking: Is there some serious math that is similar to this paradox?

Friday, October 03, 2008

A minor but perhaps indicative failure of technology

In February of 2007 I needed to check the Hallmark Channel website for program changes. The website did not list any program changes, but it did give a number to call: 1-888-390-7474. I called it and it said:
You have reached viewer services at the Hallmark Channel. In order for us to better serve our viewers, this mailbox is specially designed to answer most of your questions. If you are having trouble viewing the Hallmark Channel, press 1. For a list of program changes, press 2.
I thought GREAT!. I pressed 2.
The following changes have been made to our program schedule as of Wednesday June 29, 2005. Jane Doe-The Harder they Fall, originally scheduled to air Friday July 1, 2005 at 9:00PM Eastern and Pacific, 8:00PM Central, and 7:00PM Mountain time, has been replaced by Matlock-The Power Brokers.
They had not updated their phone message in two years. I left a message (If you would like to leave a comment, press 4) telling them of their error. I have since called on April 1, May 1, and June 1 of 2007 to see if they had corrected it. They had not. I've also emailed them, to no effect. (It's been fixed since then.)

How could a phone line be this out of date? How could the website maintainers not know this?
  1. They got careless and are not suffering for it so they don't know they got careless. I'm not going to stop watching their shows because their website gives a phone number that is two years out of date.
  2. Lack of coordination- the right hand does not know that the left hand stopped working two years ago.
For this particular issue, the breakdown of technology is not important. But it troubles me that in other domains it might be.

Thursday, October 02, 2008

Go Sox

After beating three different teams in three days to make the playoffs, my White Sox join the Cubs in an historic time in Chicago: The first time both Chicago baseball teams are in the postseason since that great 1906 World Series. The Sox open the playoffs today against the surprising AL East winner Tampa Bay.

With the Boston Red Sox, that makes three teams in the playoffs for which I have at some point held season tickets (I lived walking distance from Wrigley for a year, and I organized season tickets for the theory group at MIT as a student there). Add in Milwaukee, a team I have grown to like: they play in a new stadium that I can get to more easily than either of the Chicago ballparks, and you can't beat the sausage race. And four more teams that hopefully will all be eliminated in the first round. This is just the ultimate playoff year to keep me distracted from various fiscal crises and political activities. I might not get much research done in October.

The eyes of America look at the Cubs who last won a World Series in 1908, one hundred years ago. But although I now teach north of the city, my heart remains with the Southsiders. The best outcome: The White Sox win the world series in five games. Why five and not the full seven? Because it is so much more fun to see the Cubs choke at home.

Wednesday, October 01, 2008

When does Alice deserve to be a co-author?

When has someone done enough work to deserve a co-authorship? Here are a few scenarios I have seen. I will say what did happen, not what should have happened.
  1. Alice comes up with the problem and makes some obvious observations about it, she shows it to others, who solve it. She writes it all up. (Alice got the co-authorship.)
  2. Alice comes up with the problem and makes some obvious observations about it, she shows it to others, who solve it. She does not write it up but does some proofreading. (I've seen this twice- once she got the co-authorship, once not.)
  3. Bob shows Alice a problem, Alice makes one observation that is the key. She has put only 2 minutes of work into the paper. (Alice got the co-authorship.)
  4. Bob has already written up the 50 page paper and it has covered all of the cases except for one. Alice solves the one case left. It was not a hard case at all. (Alice did not get the co-authorship.)
  5. Bob and Carol solve the problem together. Alice is in the room and does not contribute anything, but her tenure case is coming up soon and she is a bit short on papers. (Alice got the co-authorship. This is a rare scenario.)
  6. Alice proofreads a paper and finds a serious flaw in it, but fixes that flaw. It was a rather subtle flaw and needed a very clever argument to fix. (Alice got the co-authorship.)
  7. Alice reads Bob's paper and comes up with a generalization that Bob does not care about, but wouldn't really be worth a paper on its own. (The paper did not have the generalization and Alice did not get the co-authorship.)
  8. Alice makes a key observation that makes all of the proofs shorter. The Conference version has her as a co-author. However, while writing up the journal version it is noticed that Alice's contribution was not correct. It is removed and the journal version is pretty much what it would have been had Alice not been involved at all. (Alice asked to be taken off of the journal version The other authors were surprised by this and did not mind keeping her as an author. They pointed out that there were other people on the paper that were even less involved. But they agreed to take her off on her insistence.)