Friday, June 29, 2012

Instance Compression

Some announcements: FOCS Accepts, ITCS Call and ToC Special Issue for Rajeev Motwani

I'd like to highlight one of the new FOCS papers, "New Limits to Classical and Quantum Instance Compression" by MIT student Andrew Drucker, which solves a problem that has dogged me for a few years. Ignore the "Quantum" in the title; that's not the interesting part.

Consider the OR-SAT problem: given m formulas, each of size n, is at least one of them satisfiable? In STOC 2008, Rahul Santhanam and I showed that if there is a reduction from OR-SAT to any language L that maps instances to instances of size polynomial in n (independent of m), then the polynomial-time hierarchy collapses. The question arose from parameterized complexity, though it also has some applications to cryptography and PCPs.
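In symbols, here is how I would state the hypothesis and conclusion (the notation below is mine, introduced for illustration, not taken from the paper):

  \[
  \textsc{OR-SAT} = \{\langle \varphi_1,\dots,\varphi_m\rangle \;:\; |\varphi_i|\le n \text{ for all } i \text{ and } \varphi_i \in \mathrm{SAT} \text{ for some } i\}
  \]
  \[
  \text{If there is a poly-time } f \text{ and a language } L \text{ with } x\in\textsc{OR-SAT} \iff f(x)\in L \text{ and } |f(x)|\le \mathrm{poly}(n), \text{ then } \mathrm{PH} \text{ collapses.}
  \]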

Our techniques failed to get a similar result for AND-SAT, where we want to know if all of the formulas are satisfiable. Drucker settled the question, showing that a compression of AND-SAT (reducing to instances of size polynomial in n) also gives a similar collapse of the hierarchy. What impressed me the most was the ingenuity Drucker uses in his proof.

Drucker derives the following lemma about probability distributions, based on a strong version of Fano's inequality. Let A be a set of strings, D a distribution over A, and f any function that maps A^m to {0,1}^(m-2). If you pick a y from D, then with high probability there is an index i such that the distributions f(D^m) and f(D^m with the i-th coordinate replaced by y) are statistically close.
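Stated a bit more explicitly (this is only my paraphrase of the lemma as described above; ε and δ stand in for the actual parameters in Drucker's paper):

  \[
  f : A^m \to \{0,1\}^{m-2}, \qquad X=(X_1,\dots,X_m)\sim D^m,
  \]
  \[
  \Pr_{y\sim D}\Big[\,\exists\, i:\ \Delta\big(f(X),\ f(X_1,\dots,X_{i-1},y,X_{i+1},\dots,X_m)\big)\le \epsilon\,\Big] \ \ge\ 1-\delta,
  \]

where Δ denotes statistical distance.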

Drucker combines this lemma with a minimax argument and a result of Sahai and Vadhan that characterizes Statistical Zero-Knowledge by the problem of determining whether two distributions, given by the circuits that generate them, are statistically far apart.

Sometimes people solve your open questions with a simple argument that makes you hit your head for not thinking of it yourself. Other times you just sit back amazed at the solution knowing there was no hope for you to find it. Drucker's paper definitely falls in that second category.

Wednesday, June 27, 2012

Transitions

On July 1 I officially move jobs from Northwestern to Georgia Tech, a change not just in location but in role as I become a department chair.

I'm proud of what we put together in my 4 1/2 years at Northwestern: a strong group at the intersection of CS and economic theory with Jason Hartline and Nicole Immorlica, and great connections with the economists at Kellogg, the Northwestern business school.


So why leave? I've been thinking about taking on more of a leadership role at this point in my academic career, and when Georgia Tech came calling it was impossible to turn them down. But all moves are bittersweet and I'll truly miss the atmosphere we put together at Northwestern.


Also in the CS community, on July 1 I pass the mantle of SIGACT chair to Paul Beame. In exchange I'm joining the CRA board on that date.

This is a week of many conferences: ICML in Edinburgh, LICS in Croatia, Women in Theory in Princeton and the blog's namesake Computational Complexity in Porto. Having attended the first 26 meetings of the Complexity conference, I'm missing it for the first time. Bill is there and hopefully he'll come back with a full report.

Eric Allender remains the only person to have attended every Complexity conference. "I'll continue to go as long as they are fun", he remarked last week. May he have reason to go for many more years.

Which brings me to the saddest transition of them all. Manfred Kudlek, retired professor at Hamburg, passed away last Monday, on the bus to the Bletchley Park excursion during the Computability in Europe conference. Manfred was well known as the only person to have attended all 38 ICALP conferences; the Warwick meeting next month will seem empty without him.

Saturday, June 23, 2012

Alan Turing (1912-1954)

Today we celebrate the 100th anniversary of the birth of Alan Turing, remembering his incredible life and mourning his tragic death. Alan Turing simply asked: how does a mathematician solve problems? He developed a simple model, now called the Turing machine, that perfectly captures computation and that, years later, would be the right model to capture computational complexity.

Alan Turing did so much more, as codebreaker, computer builder and developer of his test of intelligence, that he richly deserves the title "father of computer science". But it is the Turing machine that gives us the right notion of computation, both formal and intuitive, and the foundation for a whole new research field.

A week ago ACM honored Turing by bringing together 32 recipients of the Turing Award, the highest honor in computer science. It was an incredible retrospective on Turing's influence on computer science from those who best followed in his footsteps. You can celebrate the day by watching the webcast. I recommend the panel sessions on the Turing Computation Model and the Algorithmic View of the Universe. Just seeing these great theorists on stage together is a treat.

Today I have the privilege to celebrate Turing's legacy in Cambridge, England, where the Computability in Europe conference ends its meeting at King's College of Cambridge University. Turing was an undergraduate at King's College when he developed the computational model and wrote his truly classic paper. Everyone should read Section 9, where Turing gives an amazing argument for why his model captures computation.

Thank you Alan. The world computes because of you.

Thursday, June 21, 2012

The Ketchup Problem


MIT scientists are working on a bottle of ketchup where you CAN get out every last drop. See here for details and some nice videos of the new bottle in action. This will be used on other products (like mayonnaise) and will save much food from being thrown away every year! However, IF this had been around 45 years ago, then I might never have gone into Math!!! Why is that?

When I was 8 years old I had the following thought:

When ketchup is at the end of the bottle it drips very slowly. It never stops dripping, hence the number of drops is infinite. Could I put an empty ketchup bottle below it, catch all of those drops, and hence never run out of ketchup?


When I was in college I came up with the following explanations:

  1. NO: By the same reasoning there will never be an empty bottle of ketchup!
  2. NO: Each drop is half the size of the previous one, so even though this is an infinite series, it converges (see the sketch after this list).
  3. NO: The time between drops doubles, so it would take infinite time to fill up another bottle.
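Here is a small numeric sketch of explanations 2 and 3. The halving of drop sizes and the doubling of the gaps between drops are toy assumptions for illustration, not measurements:

  # Toy model (assumed, not measured): drop k has volume (1/2)^k of the first
  # drop's volume and arrives 2^k seconds after the previous drop.
  total_volume = 0.0   # the geometric series converges (to 2 first-drop volumes)
  total_time = 0.0     # the waiting time diverges

  for k in range(60):
      total_volume += 0.5 ** k
      total_time += 2.0 ** k

  print(total_volume)  # about 2.0: infinitely many drops, only finitely much ketchup
  print(total_time)    # about 1.15e18 seconds: you would wait essentially forever

So infinitely many drops need not mean infinitely much ketchup, and collecting even a bottle's worth would take longer than anyone would wait.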


To this day I will NOT give up my conviction that the number of drops is infinite. This ketchup problem may be the first time I saw a real-world phenomenon and made a math problem out of it. I blogged about other such problems here and here.
The ketchup problem so intrigued me that I became a math major. OR the fact that I thought of this problem indicates that I liked math and hence went into it. I would say it's a chicken-and-egg thing, but we need a new metaphor since the chicken came first.

Other questions I thought of as a kid:

  1. Why do people smoke tobacco if it's so well known that it's bad for them? This one still puzzles me.
  2. If, as the TV ads indicate, people want Margarine that tastes JUST LIKE BUTTER then why not just get butter? I NOW know that Margarine was cheaper back then (still is).
  3. If, as the TV ads indicate, people want instant coffee that tastes JUST LIKE GROUND ROAST, why not just get ground roast? I had no idea what any of those terms meant as a kid. Why I cared is a mystery since I didn't drink coffee as an 8 year old (and I still don't). I now deal with math terms I don't understand. Why I care is a mystery.
  4. How is eating everything on my plate going to help the kids that are starving in Africa? Mom never quite answered that one.
  5. How come whenever I see six kids playing together there are always either three that get along or three that don't get along?


Had I pursued the first three questions I would either be in sociology or advertising. Had I pursued the fourth question I might be in economics or politics. I have no idea what pursuing the fifth question would have led to. Child psychology? Party planning?

READERS- do you recall a math problem that you cared about as a kid that likely either inspired you to do math or indicated that you were interested in math? Note that you don't need to have SOLVED the problem as a kid; I am interested in your interests, not your ability (note that I didn't solve any of my problems as a kid).

Monday, June 18, 2012

Name My Book

As many of you know I have been working on a non-technical popular science book on P versus NP for a general audience. The book is mostly finished but the biggest challenge is finding a good title.

The current working title is
The Golden Ticket: P, NP and the Search for the Impossible
The “Golden Ticket” refers to the beginning of the Roald Dahl book Charlie and the Chocolate Factory (and the two movies based on the book), where everyone is searching for a golden ticket to get a factory tour. But the publisher is worried that people will not understand the reference until they read the book.

Here are other possible titles, some from me and many from the publisher.
  • P vs NP: The Search for the Impossible
  • Finding the Needle: P, NP and the Search for the Impossible
  • The Road to Nerdvana: P, NP and the Search for the Impossible
  • The Second Most Famous Equation: P=NP and the Search for the Impossible (the first being E = mc²)
  • Can We Solve It? : P, NP and the Search for the Impossible
  • The Ultimate Math Problem: P, NP and the Search for the Impossible
  • How to Solve Everything and Why It's a Bad Idea: P, NP and the Search for the Impossible
  • The Most Important Equation You Never Heard Of: P=NP and the Search for the Impossible
  • The Answer to Everything
  • Impossible Math: The search for the solution of P, NP
  • One equation to rule them all: P, NP and the Search for the Impossible 
  • One Way Math: P, NP, and the Search for the Impossible
  • The Ultimate Puzzle: P,NP and the Search for the Answer to Everything
  • The Golden Ticket: The Impossible Equation that Could Solve All Problems
  • The Answer to Everything: The Ultimate Unsolvable Equation
  • The Golden Ticket: The Ultimate Unsolvable Equation that Could Be The Answer to Everything 
Honestly I'm not loving (or hating) most of them. We decided to ask my readers, since many of you are in the target audience for the book. Are there any of the titles above that you truly love (or hate)? Or do you have another suggestion? If we use your title, a free signed copy of the book will be yours.

Thursday, June 14, 2012

Do 50-1 longshots in the Kentucky Derby ever come in?

(I delayed posting this until after the Belmont Stakes since I wanted to see if there would be a Triple Crown winner. Alas, I'll Have Another was scratched. From what I've heard it was the right decision.)

Watching the Kentucky Derby, my wife asked: Do 50-1 shots ever win? Since the Derby has been run 138 times, it is likely that a 50-1 shot came in at least once. That is, of course, if the odds were correctly figured. Were they?

Essentially yes. For the ten longest longshots to win the Derby I call a horse UNDERVALUED if it showed (came in 1st, 2nd or 3rd) in at least one of the other legs of the Triple Crown, and a FLUKE if not. If there are other reasons to say FLUKE I do so. Sometimes I say DON'T KNOW.

  1. Donerail won in 1913 and was 91-1. (The longest longshot ever to win.) The prob that a 91-1 shot would NEVER come in after 138 races is (1 - 1/91)^138, which is roughly 0.217647740202952. So while this is possible, it's more likely that such a longshot would come in. I could not find information on whether Donerail ran in the Preakness or the Belmont Stakes. Even though I don't know, I'll say FLUKE with odds that long.
  2. Mine That Bird won in 2009 and was 50.6-1. The prob that a 50.6-1 shot would NEVER come in after 138 races is (1 - 1/50.6)^138, which is roughly 0.0636355836570341. Extremely unlikely, hence not at all surprising that such a horse won at some point, though surprising when it happens. He finished second in the Preakness and third in the Belmont Stakes. UNDERVALUED!
  3. Giacomo won in 2005 and was 50.3-1. Two horses that were roughly 50-1 have won the Derby. I leave it to the reader to calculate the prob that only one 50-1 horse wins (the sketch after this list works out a version of this under simplifying assumptions). I suspect that it is quite low, so having two of them win is reasonable. He finished third in the Preakness and seventh in the Belmont Stakes. UNDERVALUED. I recall at the time thinking that betting that he would show in the Belmont Stakes would be a sure thing. Alas, there is no such thing as a sure thing.
  4. Gallahadion won in 1940 and was 35.2-1. (The website mislabels the picture as being from 2005 and misspells the horse's name by leaving out the a after the ll. What are the odds of that?) He finished third in the Preakness and did not show in the Belmont Stakes. UNDERVALUED.
  5. Charismatic won in 1999 and was 31.3-1.  He won the Preakness and came in third in the Belmont Stakes. UNDERVALUED!
  6. Proud Clarion won in 1967 and was 30-1. He finished third in the Preakness and fourth in the Belmont Stakes. UNDERVALUED.
  7. Exterminator won in 1918 and was 29.6-1. Wikipedia has nothing on the Preakness or Belmont Stakes, so I assume he didn't run in them. DON'T KNOW.
  8. Dark Star won in 1953 and was 24.9-1. A better name would have been Dark Horse. He was fifth in the Preakness, possibly because he got injured running it, and that injury ended his career. DON'T KNOW.
  9. Thunder Gulch won in 1995 and was 24.5-1. He came in third in the Preakness and won the Belmont Stakes. UNDERVALUED!
  10. Stone Street won in 1908 and was 23.7-1. Wikipedia does not have anything about his performance in the Preakness or the Belmont Stakes, so I assume he didn't run in them. Wikipedia DOES report that his time, 2:15 1/5, is the slowest winning time ever in the Derby. Based just on this I say FLUKE.
  11. Animal Kingdom won in 2011 and was 20-1. He finished second in the Preakness and didn't show in the Belmont Stakes (I could not find exactly how well he did). UNDERVALUED.
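Here is a small sketch of the computations in items 1-3. It assumes that odds of z-1 correspond to a win probability of 1/z and that the 138 Derbies are independent; both are simplifications:

  # Probability that a horse at the given odds NEVER wins in 138 independent Derbies,
  # assuming z-1 odds mean a win probability of 1/z (a simplification).
  def prob_never(z, races=138):
      return (1 - 1.0 / z) ** races

  print(prob_never(91))    # ~0.218, Donerail's odds (item 1)
  print(prob_never(50.6))  # ~0.064, Mine That Bird's odds (item 2)

  # Item 3's exercise: chance that AT MOST ONE roughly-50-1 horse wins in 138 Derbies,
  # treating each Derby as an independent 1-in-50.5 shot (another simplification).
  p, n = 1.0 / 50.5, 138
  at_most_one = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
  print(at_most_one)       # ~0.24, so seeing two such winners is not that surprising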

Some comments:

  1. To answer my wife's question: in 138 Derbies a shot of 50-1 or longer came in 3 times, which I think is about right, or at least not surprising. So they DO win. Sometimes.
  2. I was surprised that 5 of the 10 longest longshots were since 1999.  I would think people know more about racing now and so a real long shot is less likely. That may be the wrong way to think about it.
  3. Should you bet on longshots? If you had always bet x on the longest longshot in the Kentucky Derby, and if (I have not checked this) the first, second, and third horses on the list above were the longest longshots in the years they ran, you would have ended up with roughly 190x - 135x = 55x dollars.
  4. How do you tell if a longshot was undervalued (the odds should have been shorter) or a fluke (the odds were correct but something weird happened)? Here are three ways: (1) see how they did in the other two legs of the Triple Crown, (2) see their running time in the Derby, or (3) for each race, see if there were odd circumstances. For example, sometimes it all depends on whether it rained the night before.
  5. By my measure there were six undervalued, two flukes, and two don't knows. I don't know if this means my measure is wrong.
  6. Actually I'm GLAD that longshots sometimes come in. If they never did, then they would be overvalued. This is similar to why David Pennock wrote Why we're happy we got one prediction wrong on Super Tuesday. I saw his talk the same week as the Derby, and both jointly inspired this post. What are the odds of that?
  7. I asked Dave Pennock about these issues and got two interesting thoughts from him:

    1. There is evidence racetrack odds are just about right (efficient) except with a slight "favorite-longshot bias" where favorites win a little too often and longshots not quite enough (people bet a little more than they should on longshots).
    2. It is an interesting hypothesis that odds should be less long with more information. I'm not sure that's true. With more information, the entropy of the distribution should go down, which intuitively might lead to longer long shots (and surer sure things)?
  8. Dave also told me that there has been A LOT of work on this.  Here are some references:

    1. There is a whole edited volume on the subject: Efficiency of Racetrack Betting Markets
    2. Horse racing: Testing the efficient markets model by Wayne W. Snyder. J. of Finance, V. 33, No. 4, 1978, pp. 1109-1118.
    3. Probability and utility estimates for racetrack bettors by Mukhtar M. Ali. J. of Political Economy, V. 85, No. 4, 1977, pp. 803--816.
    4. Utility analysis and group behavior: An empirical study by Martin Weitzman. J. of Political Economy, V. 73, No. 1, 1965, pp. 18-26.
    5. Anomalies: Parimutuel betting markets: Racetracks and lotteries by Richard H. Thaler and William T. Ziemba.  J. of Economic Perspectives, V. 2, No 2, 1988, pp. 161--174.
    6. Gambling and rationality by Richard N. Rosett. J. of Political Economy, V. 73, No. 6, 1965, pp. 595-607.
    7. Informed traders and price variations in the betting market for professional basketball games by John M. Gandar, William H. Dare, Craig R. Brown, and Richard A. Zuber.
    8. Information incorporation in online in-game sports betting markets by Sandip Debnath, David M. Pennock, Steve Lawrence, Eric J. Glover, and Lee Giles.  Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, pages 258-259, 2003.
    9. How accurate do markets predict the outcome of an event? the Euro 2000 soccer championships experiment by Carsten Schmidt and Axel Werwatz.  Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.
    10. Here are some fun books on the subject: Sharp Sports Betting by Wong and Calculated Bets by Skiena. I reviewed Skiena's book in my SIGACT NEWS book review column here.

Monday, June 11, 2012

Fortnoy's Complaint

During my daughter's 8th grade graduation last week, the teacher reading the names said "Molly Fortnoy...I mean Fortnow". I felt for Molly. When my name was first mentioned in a conference talk back in 1986 the speaker also said "Fortnoy". For the record my name is pronounced as it is spelled, like "He's going to the fort now".

I've been called Fortnoy many times by people in my generation or older. I put the blame on Philip Roth's book Portnoy's Complaint.

My father's birth name was Paul Fortunow. The name has Russian roots, possibly connected to Fortunoff. The 'u' in Fortunow is silent. Go ahead and try and pronounce Fortunow. You forgot the "u" was silent. So a couple of years before I was born (and before Portnoy's Complaint) my father changed his last name to Fortnow.

Foreigners usually have no trouble pronouncing my name. But once when I was checking in at the Frankfurt airport, the agent said something like "Here are your tickets Mr. Fornow". I said "Thanks but that's Fortnow". She responded, "If it were pronounced Fortnow there would be a 'u' after the 't'". I should have asked her where she was from.

Friday, June 08, 2012

A new bound on the 3-hypergraph Ramsey numbers! Why wasn't it discovered earlier?


When a new result is first discovered one question to ask is Why wasn't it discovered earlier? We look at a result in Ramsey Theory from 2010 and speculate as to why it was not discovered earlier.

Let R(k) be the least n such that no matter how you 2-color the edges of the complete 3-hypergraph on n vertices there will be a homogeneous set of size k (that is, a set H of size k such that all 3-sets from H are the same color). R(k) is known to exist by Ramsey's theorem.

  1. Ramsey's original proof (in 1930) gave astronomical tower-like bounds on R(k).
  2. Erdos-Rado (1954) showed that R(k) is bounded by 2^(2^(4k)).
  3. Conlon-Fox-Sudakov (2010) showed that R(k) is bounded by 2^(2^(2k)); both bounds are written out below. (They did a lot more in their paper: they looked at asymmetric 3-hypergraph Ramsey numbers and developed a new framework for them. See Hypergraph Ramsey Numbers on that page.)
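In display form (this is how I read the bounds above; the precise statements are in our survey, mentioned below):

  \[
  R(k) \le 2^{2^{4k}} \ \ \text{(Erdos-Rado, 1954)}, \qquad R(k) \le 2^{2^{2k}} \ \ \text{(Conlon-Fox-Sudakov, 2010)}.
  \]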
Why are these results important?
  1. An improvement on the 3-hypergraph Ramsey numbers automatically gives an improvement in the a-hypergraph Ramsey numbers for any a ≥ 3.
  2. There is an exponential gap between the upper and lower bounds of R(k) and all of the a-hypergraph Ramsey numbers. Hence any improvement may be a key to closing that gap.
I have (with Andy Parrish and Sandow Sanai) celebrated the result of CFS with a survey of bounds on the 3-hypergraph Ramsey numbers here. Proofs, intuitions and the historical context of the results enumerated above are in our survey. (If you read it and have corrections please email them to us; we plan to put it on arXiv next week.) The techniques in this paper did not depend on any mathematics unknown to Erdos or Rado (or others) in 1954. (This is in contrast to, say, Conlon's paper A New Upper Bound on Diagonal Ramsey Numbers, which uses quasi-random graphs, a technique and technology unknown in 1954.) So why didn't someone else prove the result earlier? Our speculations may apply to other results that are proven with techniques that were already known.
  1. The CFS paper develops a framework for upper bounds on asymmetric 3-hypergraph Ramsey numbers that involves a game (not a FUN game, but a game). This framework is used to prove many things. If all you want is the bound on R(k) then you don't need the framework, and you get a proof that, IN HINDSIGHT, gives you the bound in a way that Erdos-Rado could have done. That's a lot of HINDSIGHT. (Our survey gives the stripped-down proof of just that result, without their game framework.)
  2. Not that many people worked on it. Hence it is not surprising that particular individuals didn't get it. Perhaps Erdos and Rado were happy enough to get the bound down from a tower to merely 2^(2^(4k)). More generally, if an entire community is looking at a problem then the question Why wasn't it found earlier? may have a global answer. If it's just a few people, it could just be local things like After working on this problem she switched to Computable Algebraic Topology.
  3. CFS are very very clever. This is another HINDSIGHT argument: AFTER it's done it looks easy.
Other reasons that might apply in other cases, though not to the case at hand:
  1. Timing and Luck should not be underestimated. A paraphrase from The Honors Class: Hilbert's Problems and their Solvers, the chapter on Hilbert's 10th problem, page 112:
    All of these people (Robinson, Davis, Putnam, and others) had been reading Number Theory books full of obscure facts. Matiyasevich had just read the third edition of Nikolai Vorob'ev's ``popular book'' Fibonacci Numbers. In that book, published in 1969, there was a new result about the divisibility of Fibonacci numbers. Matiyasevich used it to prove: if F(n)^2 divides F(m) then F(n) divides m. This allowed him to solve Hilbert's 10th problem (building on work of Davis-Putnam-Robinson). Robinson had read that book, but not the third edition!
    With the web is this more or less likely to happen? I ask nonrhetorically. (The spelling I give for Matiyasevich is the one given in the book. I have seen different spellings.)
  2. Everyone thought the negation was true. This was one reason why NL=coNL was proven as late as it was.
  3. The problem itself wasn't looked at earlier. Some of the Time-Space tradeoffs for SAT may be in this category.
  4. Groupthink sets in and people all end up thinking the same way. Laszlo Babai noted this concern in his 1990 article Email and the unexpected power of interaction, where he writes:
    But will the diversity of thought that now exists be preserved in an era when intellectual fashions are dictated by the strongest? Shall we see more Levins and Razborovs come out of the Kolmogorov school, bringing in so prominently different, yet profoundly relevant ideas?
    He was referring to email, but the tendency toward groupthink may be even stronger with the web. (Levins and Razborovs, not Levin's and Razborov's, is how it appears in the original article.)
SO, READERS- your turn! Leave comments on results that, once proven, raised the question Why wasn't this proven earlier?, and speculate as to why. You need not write a survey of the area.

Thursday, June 07, 2012

Mihai Pătraşcu (1982-2012)

Update: Visit the Memorial Website

Mihai Pătraşcu passed away Tuesday after a bout with brain cancer. Even though he hadn't reached his 30th birthday, he had already produced a number of major achievements in the algorithmic community. The theoretical computer science community lost one of its brightest young stars.

During STOC, Yevgeniy Dodis arranged to have Mihai at the business meeting while we announced him as a co-winner of the EATCS Presburger Award, which Mihai will receive posthumously at ICALP next month. The standing ovation that followed made for the most emotional moment I have ever seen at an academic conference.

We asked Mohammad Taghi Hajiaghayi to write some personal thoughts on Mihai:

Last night I heard the very sad news that Mihai Pătraşcu has passed away at the age of 29. Though I had known about his cancer for 1.5 years and about the several steps he took to fight it, the news was still quite shocking to me. Mihai's career was short but very productive, with lots of great papers in, e.g., STOC, FOCS, and SODA.


Indeed Mihai is the second close friend of mine, and bright star researcher from my MIT days, whom I have lost in recent years (the first was Misha Alekhnovich, who died in 2006 in a white-water kayaking accident in Russia).


I first met Mihai in 2002 when he joined MIT. He was interested in working on data structures with Erik Demaine, who was my Ph.D. advisor as well. As a result, I got to know him much better, especially since both of us had the common experience of IOI (the International Olympiad in Informatics). One of Mihai's early ground-breaking papers is
Erik D. Demaine, Dion Harmon, John Iacono, Mihai Patrascu: Dynamic Optimality - Almost. FOCS 2004: 484-490
In this paper an O(lg lg n)-competitive online binary search tree is given, improving upon the best previous competitive ratio of O(lg n). This was the first major progress on Sleator and Tarjan's dynamic optimality conjecture of 1985, that O(1)-competitive binary search trees exist. Indeed Mihai was able to formulate the O(1)-competitive conjecture as a statement about the approximation factor of some algorithm, and we worked together for some time to prove the conjecture (which so far we could not:)). This was my first research experience with Mihai. Since then we always talked about research (but we never had a joint paper).


In general Mihai and I were close friends and talked about all kinds of things. For example, people sometimes asked me about Mihai's personality, especially because of some posts on his blog that were a bit controversial. I always told them that Mihai was a very nice guy to work or chat with, and that those posts raised concerns that everyone has in mind; Mihai was just brave enough to voice them loudly on his blog in the hope of changing the way our community thinks (and anyone, including myself, may agree or disagree with them).


After Mihai's Ph.D. at MIT, which took him only 2 years (the same was true for Misha Alekhnovich), he applied to lots of places and got several very good offers. I was one of the people who encouraged him to join AT&T research labs, where I was at that time. In the end Mihai decided to decline several faculty offers and join AT&T, especially to work with Mikkel Thorup (before joining AT&T he spent a year at IBM Almaden as a Raviv Postdoctoral Fellow). He also chose AT&T because of its location near NYC, which was good for his wife's career. As a result, from July 2009 I was very happy to have Mihai, a star researcher, as my colleague at AT&T research labs, and we talked even more about everything during the past years. For example, later I wanted to teach hashing in my Introduction to Algorithms course at UMD; I asked Mihai and he gave me a very good overview of the field, including an overview of his recent ground-breaking paper on simple tabulation hashing.


I first learned about Mihai's cancer in Feb 2011. At that time he was undergoing chemo before a brain surgery in March 2011. Just a day after the surgery he sent an email to David Johnson from the hospital saying that he had already recovered and had started thinking about his research problems. After this surgery the doctors, Mihai, and all of us at AT&T were very hopeful that the bad days were behind him and that Mihai would recover fully and produce lots of papers again:). Indeed everything was fine after that, and Mihai even applied for faculty positions in Fall 2011 (he always considered his AT&T (industry) position a long-term postdoc). He asked for my opinion about his research statement and I was happy to give him some comments (especially since I had also first taken the path of industry and then academia). Again everything was fine until mid-January 2012, when Mihai learned that the cancer was back. Unfortunately this time the cancer was very aggressive, and within a week or two he went from completely healthy to a wheelchair. I saw him several times at AT&T recently (when I visited there) and I was always very sad to see him in a wheelchair. To my mind, the best news for Mihai since January 2012 was that he was selected as a co-winner of the 2012 EATCS Presburger Young Scientist Award for his ground-breaking work on data structure lower bounds. He was very happy about it, as he posted on his blog as well.


In the end I pray that Mihai's soul rests in peace and that the research area of data structure lower bounds, in which he had several ground-breaking works, becomes ever more prosperous due to his work.

Monday, June 04, 2012

Which Books to Keep?

Moving is an excuse to go through your possessions and weed out what you don't need anymore. Over my professional life I've collected two large bookcases full of CS, Math and Econ books and all the STOC, FOCS and Complexity proceedings from 1986 until they stopped publishing proceedings a couple of years ago. I also have a surprisingly large collection of complexity Ph.D. theses.

What do I move and what do I toss or give away? All the proceedings are now on-line. Maybe keep STOC 1987, my first conference paper? How many different editions of Li and Vitanyi's Kolmogorov tome do I need? How many Introduction to Theory textbooks? Publishers send them to me since it is the one undergraduate class I have consistently taught. I use Sipser, partly because he was my Ph.D. advisor, but mostly because it's a great book.

One approach is to toss everything. As some of my students say, if it's not on the web it can't be of much value. But I can't imagine life without some of the classics. When I want to prove something NP-complete, I still start by finding the closest related problem in Garey & Johnson to reduce from. For that one needs to skim through their list of problems. I have yet to see a good way to skim electronically.

I'm not a luddite. I do all my pleasure reading on the Kindle and read and mark up PDFs on my iPad. But when it comes to math books, we still haven't found a good replacement for paper.