Monday, October 31, 2005

Making Yourself Known

An assistant professor asks
How do I get on program committees and editorial boards?
PC chairs and editors-in-chief usually have several excellent candidates to choose from, so you really have to make yourself stand above the crowd. How do you do this?

Prove. Easier said than done, but there's no better way to make yourself known than by proving great theorems. Quality counts more than quantity. Be sure to make your results public, by submitting them to sites like ECCC as well as putting them on your own homepage.

Talk. When you give a talk, take the time to prepare, sell your work, make the talk understandable and audience-appropriate. Someone told me recently they treated every talk like a job talk. Not bad advice.

Meet. Go to workshops and conferences not for the talks but to meet people. Don't just hang out with people from your own university. Skip some sessions, hang out in the hallways and talk to whoever is around. Reconnect with your old colleagues from graduate school and make an effort to meet new people. Have lunch and dinner with people you don't know.

Write. Write up your research well so people enjoy rather than suffer when reading your papers. Put some effort into your introductions and really sell the importance of your work. In addition write a survey paper, write a book, write a weblog. Get others to view you as an expert in the field.

Organize. Organize a workshop, do local arrangements for a conference or other service to the community. I don't recommend this route for assistant professors as it takes considerable time and won't help your tenure case much.

Wait. Be patient. Your time will come.

Sunday, October 30, 2005

List Decoding Beyond Expectations

The recent FOCS conference had two best paper award winners, the Khot-Vishnoi paper I had mentioned in my post on unique games and Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time by Parvaresh and Vardy.

List decoding considers codes where there are too many errors to uniquely decode a string. List-decoding algorithms instead create a short list of all of the possible decodings. Last year I had discussed list decoding in my Favorite Theorems series and said

Guruswami and Sudan give a polynomial-time algorithm that handles what is believed to be the best possible amount of error.
Guruswami and Sudan's algorithm works for the standard Reed-Solomon codes and is likely the best possible for that code. Parvaresh and Vardy develop a new but related encoding to get an improvement in the rate of the code, the ratio of the original message length to the code length. Guruswami and Sudan show how to list decode a 1-ε fraction of errors using Reed-Solomon codes with a rate of O(ε²), while Parvaresh and Vardy achieve the same error fraction with their code at rate O(ε/log(1/ε)).

Friday, October 28, 2005

Algorithms for a Ten-Year Old

Yesterday my fifth-grade daughter, doing her math homework, asked
Is there a faster way to find greatest common factors other than with factor trees?
I live for these days. After I impressed her with Euclid's algorithm she asked
Is there a faster way to factor than with factor trees?
I thought for a while and then just answered "No."
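For the curious: Euclid's observation that gcd(a, b) = gcd(b, a mod b) gives a method far faster than factor trees. A minimal Python sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is 0; the last nonzero value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # → 21
```

Each step at least halves one of the numbers every two iterations, so the running time is logarithmic in the inputs, versus the exponential-time factoring a factor tree requires.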

Wednesday, October 26, 2005

Selling Theory

Thanks to Rocco for bringing us the news from FOCS, particularly a comprehensive summary of the business meeting. I am glad to have watched my White Sox in the World Series live and read a recap of the business meeting, rather than the other way around.

A number of bloggers including Scott, Suresh, Sariel and physicist David Bacon have weighed in on the big panel discussion on how to generate interest in theoretical computer science. There was a big push for our community to publish in Science and Nature. I have seen more than a couple rather mediocre CS papers in Science. We need more than to just send our papers to these journals, we need members of our community on the editorial board.

The most interesting comments came from science writer Sara Robinson.

There is a perception among science editors that TCS is not what people want to read about: they want stories about health, things that cost a lot of taxpayer dollars, etc.
The New York Times, which Sara Robinson has written for in the past, used to give good coverage to research in theoretical computer science. But its Tuesday section, Science Times, has moved over the last couple of years from general coverage of science to a focus on medicine, the environment and astronomy. Not just computer science but also physics and chemistry get far less coverage than they once did.

What scares me most is what I see from incoming undergraduates. Far more high school students use computers now than, say, ten years ago, but far fewer of them know how to program. Computer science is a victim of its own success: by making computers powerful, easy to use and well-connected, we have turned them into a commodity like cars, with the mystique of computation and complexity lost on the users.

Page Six

Final guest post from FOCS attendee Rocco Servedio.

Well, another FOCS has come and gone. In 50 years -- 100, tops -- we will know which papers from the conference stood the test of time. But meanwhile on to more pressing matters. Last week Lance promised that I would provide "all the gossip from the conference;" I'd hate to disappoint, so here goes. All names have been changed to protect the guilty, and pronouns should not be used to infer gender. Presenting...


  • POTTED PROFESSOR: WHICH thirsty theorist drank so much beer at the business meeting that his PhD students had to help him back to his room? The greedy guzzler was next spotted Monday afternoon nursing a mug of black coffee in the back row and wincing at microphone feedback.

  • DINNER DILEMMA: WHICH graph theory guru created a scheduling snafu when he separately told two rival gangs of theorists that he'd "meet you in the lobby in 15 minutes?" Let's hope his administrative assistant at Famous University manages things better when he's on his home turf.

  • ENOUGH ALREADY! WHICH logorrheic logician went so far over time that he "practically had to be dragged off the podium?" Our sources say the session chair was scant seconds from pulling the projector plug when the babbling bore finally zipped it.

  • HEARTBREAKER: WHICH complexity theorist Casanova has a love life that's more complicated than the proof of the Parallel Repetition Theorem? It seems there's no lower bound on this cheating cad's bad behavior.

Sadly, as you've probably guessed, none of these things actually took place (or if they did I didn't know about it; if so that's even sadder). In all seriousness, thanks to Lance for letting me post these last few days; it was fun, especially the chance to branch out into fiction-writing at the end.

Tuesday, October 25, 2005

Knuth Prize

The 2005 Knuth Prize was awarded to Mihalis Yannakakis of Columbia University. Mihalis's Knuth prize lecture was on "Probability and Recursion."

Monday, October 24, 2005

FOCS Business Meeting

Notes from the FOCS 2005 business meeting, reported by Rocco Servedio.

  • Yuengling, Sam Adams, Aspen Edge (low-carb).

  • Local arrangements: There were 144 non-student registrants and 141 students for a total of 285 registered attendees. Thanks to Avrim Blum and Anupam Gupta for a job well done on local arrangements. An interesting fact: doing local arrangements ourselves saves about $100/person on registration fees.

  • PC report: There were 268 submissions. 7 papers were retracted (an all-time high?); two of these were because of bugs found by the PC and five were initiated by the authors. Authors are encouraged to submit correct papers. There are 62 papers in the proceedings; 3 pairs of papers were merged (these papers got extra space in the proceedings). Distribution of authors (authors of multiple papers are counted multiple times): 121 from the U.S.A., 23 from Israel, 6 from Canada, 3 each from Denmark and Italy, 2 each from Germany, India, the Czech Republic and Hungary, and 1 each from Poland, the Netherlands and Japan. The average number of authors per paper is 2.64 (or 2.48, depending on how merged papers are counted). There were 7 single-author papers.

  • The two papers that were selected for Best Paper awards are "The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into \ell_1" by Subhash Khot and Nisheeth Vishnoi, and "Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time" by Farzad Parvaresh and Alexander Vardy.

  • Two papers shared the Machtey Award for the best paper authored solely by students. These were "On the Complexity of Real Functions" by Mark Braverman and "On the Complexity of Two-Player Win-Lose Games" by Tim Abbott, Daniel Kane, and Paul Valiant.

  • As all readers of this weblog know, the next CCC (Computational Complexity Conference) will be held in Prague from July 16-20, 2006. The submission deadline is December 4.

  • LICS 2006 will be held in Seattle in August 2006 as part of the Federated Logic Conference.

  • FOCS 2006 will be held in Berkeley; Sanjeev Arora will be PC chair.

  • Following an entertaining "Star Wars" themed bid, it was decided that FOCS 2007 will be held in Providence.

  • STOC 2006 will be held May 20-23, 2006 in Seattle. The submission deadline is November 3 (so stop reading this weblog and get back to work).

  • SPAA 2006 will be held July 30-August 2, 2006 in Cambridge, MA.

  • There was a panel discussion on "Exciting the public about (theoretical) computer science." The panelists were Bernard Chazelle, Dick Lipton, Ivars Peterson (writes about math and computer science for Science News), Sara Robinson (freelance writer in math and CS), Steven Rudich, and Eva Tardos; Avrim Blum moderated the discussion. A few fragmentary snapshots from the discussion:

      Chazelle: Computing has never been more important, and never been more misunderstood. We are not doing as good a job of getting our work into the public eye as other fields such as physics. If the broader scientific community comes to use algorithms as a conceptual framework for the problems they are dealing with, we will have done well.

      Lipton: We have lots of really profound and interesting intellectual challenges; one way to excite the public is to talk to them about these fundamental questions.

      Rudich: How do we take what we are doing and translate it into problems that people can relate to and care about? We have a million forms of encoding and should be able to do this; everyone can relate to the problem of trying to pack items into a suitcase of limited size.

      Tardos: Whatever you do, it is probably possible to explain it to the public. There is an awful lot of stuff we do that is really not that hard to explain. A straw poll of the audience showed that very few people in our community have ever published in Science or Nature; it would be good if this could change.

      Peterson: Publicity takes effort. The American Chemistry Council is spending twenty million dollars on advertising to sell the importance of research in chemistry. Astronomy often gets the front page of The New York Times; this is because of carefully orchestrated arrangements behind the scenes. The ACM, SIAM, IEEE do no publicity that I (Peterson) am aware of as a journalist. To get into the media: publish in Science and Nature. Lay language summaries and images are provided to the media a week in advance of each issue. There is always a Nature story in the newspaper on Thursday and a Science story on Friday. For newspaper coverage, one writer or a very small group can make a difference.

      Robinson: Even all the approaches suggested above will have only a limited effect. Two reasons for this: (1) Theoretical computer science is hard to understand for the lay public and for reporters (and, as one audience member shouted out, for us). It is easier to write about global warming or why the coyotes are multiplying. (2) There is a perception among science editors that TCS is not what people want to read about: they want stories about health, things that cost a lot of taxpayer dollars, etc. Perhaps we should explore new models such as a dedicated math/science news agency?

      (anonymous science writer audience member): "People like dinosaurs, asteroids, and things coming out of the ground...very little of what you guys have is concrete."

  • Finally, Dick Karp gave a report on behalf of the SIGACT committee on funding for theoretical computer science. The main goal of the committee is to improve the stature and support of TCS within the NSF. Based on a sample of 90 TCS investigators receiving funding between 2000 and 2004, 23% of funding came from TCS (Foundations of Theoretical Computer Science) and 55% came from ITR grants (now terminating). The average number of grants per year received per investigator was 2.4, and the median grant size per investigator per year was $70K. The 2005 TCS budget is about $6M. Some concrete items on the agenda for the future are to make a well-documented case that TCS is underfunded and to move TCS up a level in the CCF hierarchy.

    Sunday, October 23, 2005

    First day of FOCS

    A guest post from FOCS attendee Rocco Servedio.

    Thanks to Lance for giving me this opportunity to fill in. I'm in Pittsburgh for FOCS, which started today and runs through Tuesday. Unfortunately because of travel constraints I couldn't make it to the Frieze Fest or FOCS tutorials that took place yesterday.

    Pittsburgh is a city which sometimes gets a bad rap, but I've always enjoyed coming here. Besides the obvious -- at least for this blog's readers -- attraction of CMU, there are lots of neat things that you can't find anywhere else. I personally like the National Aviary, and the Carnegie museums are fun too. At the Warhol museum you can watch movies that make even the driest FOCS talk seem like a Jerry Bruckheimer production.

    No time for museums today, though; the FOCS schedule is full of cool-looking talks, and tonight there is the business meeting and a panel discussion on "Exciting the public about (theoretical) Computer Science." I'll give a report on the business meeting and panel discussion later.

    Thursday, October 20, 2005

    Math in Complexity

    Another guest post by Bill Gasarch
    Combinatorics is a branch of Computer Science
    First episode, Second Season of NUMB3RS
    I could write a post on how stupid this statement is, but I'd rather ask the following better question:
    How much do we use Combinatorics in Complexity Theory? Do we use anything else?
    To answer this, I looked through the COMPLEXITY 2005 proceedings and assigned to each paper the sort of math I thought it used. I'm sure some assignments can be argued with. But here are the results:
    • Probability: 8
    • Combinatorics: 6
    • Linear Algebra: 6
    • Abstract Algebra: 4
    • Number Theory: 2
    • Diagonalization and Simulation: 2
    • Calculus: 1
    • Dynamic Programming: 1
    • Philosophy: 1
    1. Very little continuous math. Even the Linear Algebra is usually over a finite field.
    2. Very little Diagonalization/Simulation. This is part of the general trend away from the methods of logic. I suspect there was a lot more of it at the first STRUCTURES back in 1986.
    3. More abstract algebra than I would have thought, but I don't know if this is unusual or not.

    Wednesday, October 19, 2005

    Football Schools

    I am spending most of this week at the University of Nebraska for a talk and a workshop. What does Nebraska have to do with Notre Dame, where I visited last month? Both are traditional football powerhouses, places where the sport dominates the school and more Americans know these universities for their teams than their academics. I've heard most universities actually lose money on their football programs (though Notre Dame is a noted exception). Still, schools use football to attract students, raise school spirit and bring back alumni and their money. In many states the highest paid public employee is the football coach. Notre Dame attracted a star professor by promising him season tickets "between the 45s," and Nebraska smartly has an admissions office inside the stadium.

    Many foreigners find the level of US college athletics surprising but having grown up in this country I was shocked to find out European universities, for the most part, do not play each other in any sport, not even soccer. Where's the fun in that?

    My next university trip will be to the University of Rochester, not a football powerhouse and in the same Division III wannabe-Ivy league as the University of Chicago. Chicago used to be a football powerhouse, part of the Big Ten, and had the first Heisman Trophy winner, Jay Berwanger, in 1935. But then the new president, Robert Maynard Hutchins (who reportedly said "Whenever I feel like exercising, I lie down until that feeling goes away"), eliminated the athletic programs and focused the university on academics. Only in the past few decades have they even had Division III teams.

    With all this traveling, I won't be going to FOCS. But don't worry, I have lined up a special guest blogger to bring us all the gossip from the conference.

    Tuesday, October 18, 2005

    Finding Nash has Same Complexity as Finding Fixed Points

    In a new paper, Daskalakis, Goldberg and Papadimitriou show that finding a Nash equilibrium in matrix games with four or more players is complete for the search class PPAD. PPAD is best described by the following problem: given an exponential-size directed graph in which every node has in-degree and out-degree at most one, described by a polynomial-time computable function f(v) that outputs the predecessor and successor of v, and given a vertex s with a successor but no predecessor, find a t≠s that has no successor or no predecessor. The underlying combinatorial statement that such a t exists is used to prove Sperner's Lemma, which can be used to prove the Brouwer fixed point theorem, which in turn can be used to prove the existence of Nash equilibria.

    The authors leave open the complexity of finding Nash equilibria for two and three players. They conjecture that for three players the problem remains complete for PPAD but that two-player Nash equilibria can be found in polynomial time.
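The search problem behind PPAD, often called END-OF-LINE, can be illustrated on a toy instance. This is a hypothetical four-vertex example of my own; in a real PPAD instance the vertex set is exponentially large and f is given as a small circuit, which is exactly why the naive walk below is believed to be infeasible in general:

```python
# Toy END-OF-LINE instance: the graph is the single path 0 -> 1 -> 2 -> 3.
def f(v):
    """Return (predecessor, successor) of vertex v, with None where absent."""
    succ = {0: 1, 1: 2, 2: 3}
    pred = {1: 0, 2: 1, 3: 2}
    return pred.get(v), succ.get(v)

def end_of_line(s):
    """Follow successors from the source s until reaching a sink.
    On an exponential-size graph this walk can take exponentially many
    steps, so it does not give a polynomial-time algorithm."""
    v = s
    while f(v)[1] is not None:
        v = f(v)[1]
    return v

print(end_of_line(0))  # → 3
```

The vertex returned is the promised t≠s with no successor; its existence follows from a parity argument (every path that starts must end), which is the combinatorial principle PPAD captures.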

    Monday, October 17, 2005

    True Impact

    How do you measure your impact as a computer scientist? You can try measures like the Citeseer rank or the h-index, but the only scientifically valid test would compare the world today with the world where you were never born.

    We can never actually run such a test but we can try the thought experiment. Even if you are one of the "greats," most of your theorems, even the best and most surprising, would eventually have been proved a few months or a few years later. Other theorems would never have been proved because no one, other than the nonexistent you, would have cared. Other than speeding up science a little bit, you cannot have a long-term individual impact on the field solely by proving theorems.

    But proving those theorems builds your reputation, and with that reputation you can, for better or for worse, help shape the direction of the field and set the research agenda for a generation of young graduate students. You also have lasting influence through your graduate students and the undergrads you convince to study computer science.

    We can run this thought experiment the other way. Suppose many years ago a sperm darted right instead of left and fertilized an egg that hatched a true genius in our field. How much difference could that one person have made on our research and our lives?

    Sunday, October 16, 2005

    Blogging and Academics

    The University of Chicago denying tenure to an assistant professor is rarely a breaking news story. Yet political scientist Daniel Drezner's case received considerable press including a Chicago Tribune story. Why? Because he had a popular blog.

    I doubt the content of the weblog or its existence or popularity played negatively towards his tenure case. Perhaps some feel his time would have been better spent on "real academics" but most likely they considered his more traditional academic writings and, frankly, it's very difficult to get tenure at the U of C, particularly in the social sciences.

    Will Drezner's weblog help him in his future job hunt? Ivan Tribble argued that weblogs can hurt a candidate for an academic position.

    The content of the blog may be less worrisome than the fact of the blog itself. Several committee members expressed concern that a blogger who joined our staff might air departmental dirty laundry (real or imagined) on the cyber clothesline for the world to see. Past good behavior is no guarantee against future lapses of professional decorum.
    I disagree with Tribble. Most non-anonymous academic webloggers know better than to discuss departmental politics in their blogs, and departmental hiring committees should or will realize they have nothing to fear. A popular weblog raises one's visibility in and out of the field; far more people read this weblog than download my research papers, for example. A weblog like Daniel Drezner's (much more read than this one) gives him an edge over his peers, a popularity that will open some doors that others will have to fight harder for.

    Thursday, October 13, 2005


    Fonts are the last thing I want to worry about when I write a research paper. Unfortunately fonts have often become the last thing I need to worry about when I write a research paper.

    In the olden days (circa 1990), we all wrote our LaTeX papers using the Computer Modern font. When we sent a paper to a proceedings we printed up a clean copy and sent it via Federal Express.

    Now we have choices of fonts. Fonts are surprisingly complicated. A good font is a work of art, and a scalable font is actually a computer program for each letter. If you think intellectual property issues for digital music are complicated, IP for typefaces is nearly impossible to get right.

    When societies like the IEEE first started taking electronic uploads for their proceedings, we would see occasional disastrous effects because the IEEE fonts didn't match the fonts people used to create a paper. For example, a "<" would appear as a "⇒", making some of the papers unreadable. Most of these organizations have become more aware of this issue but now require us to jump through hoops (using the right fonts and style files and putting the paper in the appropriate format using the right program). It makes me wish for the old days when I could just send a paper and they would scan it, which the IEEE will still do but charges extra for.

    Sometimes you'll see "¿From" in older papers. This is not a font problem but a consequence of sending text files through email: mail software would add a ">" to any line beginning with "From", which would come out as "¿From" after LaTeX processed the file. You see it less now as files get sent as attachments instead of directly in the mail body.

    Worrying about something as minor as fonts keeps us from focusing on research and other important activities. Remember, no one was ever denied tenure for bad font selection.

    Wednesday, October 12, 2005

    Early or Late

    As you can see from the timestamp of this post, I came into work quite early this morning. I had to drive and needed an early start (about 6 AM) to beat Chicago traffic and get a good parking spot. When I arrived I saw another professor in his office. I knew he wasn't the early type and likely spent the night here. When I said "Hello," he replied "Deadline."

    Which one of us is keeping the crazier hours?

    Monday, October 10, 2005

    Favorite Theorems: Logical Characterization of NP

    September Edition

    Usually we think of the class NP as either languages accepted in polynomial-time by a nondeterministic Turing machine or languages with polynomial-time verifiable witnesses. Ronald Fagin gives a characterization of NP based on logic without references to Turing machines or polynomial time.

    Ronald Fagin, Generalized first-order spectra and polynomial-time recognizable sets. Complexity of Computation, ed. R. Karp, SIAM-AMS Proceedings 7, 1974, pp. 43-73.

    In this paper Fagin shows that NP consists of exactly the languages expressible by existential second-order formulas. For example, consider a graph G described by an edge relation E(i,j). We can express that G is k-colorable by

    ∃C ∀i,j (1≤C(i)≤k ∧ (E(i,j)→C(i)≠C(j)))

    With some more work you can use binary predicates. In general every language in NP has an existential second-order characterization with binary predicates and a universal first-order part.
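To make the semantics of the formula concrete, here is a brute-force Python sketch (names are my own, for illustration): the existential second-order quantifier ∃C becomes a loop over all candidate colorings, and the first-order part becomes a check over all edges. The exponential loop is exactly the NP witness search made explicit:

```python
from itertools import product

def k_colorable(n, edges, k):
    """Brute-force evaluation of the second-order formula for k-colorability:
    does there exist C : {0..n-1} -> {0..k-1} such that every edge (i, j)
    satisfies C[i] != C[j]?  Runs in time k^n -- exponential, as expected
    for a direct witness search."""
    for C in product(range(k), repeat=n):        # the existential quantifier
        if all(C[i] != C[j] for i, j in edges):  # the first-order part
            return True
    return False

# A triangle is 3-colorable but not 2-colorable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(k_colorable(3, triangle, 3))  # → True
print(k_colorable(3, triangle, 2))  # → False
```

Fagin's theorem says nothing about finding C efficiently, only that membership in any NP language can be phrased as the existence of such a finite relational witness.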

    Stockmeyer generalizes Fagin's result to characterize the polynomial-time hierarchy with second-order formulas.

    Fagin's result started the area of descriptive complexity, which has characterized many common complexity classes in various logics and has connections to the complexity of database queries. Neil Immerman's work in descriptive complexity led him to his proof that nondeterministic space is closed under complement. Róbert Szelepcsényi independently came up with a similar proof through a different approach.

    Papadimitriou and Yannakakis use Fagin's result to characterize the class MAX-SNP of optimization problems. One of the first corollaries of the PCP Theorem is that MAX-SNP-hard problems cannot be approximated to within arbitrary constants unless P=NP. In fact the concept of a probabilistically checkable proof itself originally comes from a second-order view of NP that originated with Fagin's paper.

    Update 10/12: Fagin adds an addendum.

    Thanks to Lance and Siva for the kind words about my theorem. Let me clarify the story on the arity of the existentially quantified relations.

    An existential second-order formula about, say, graphs, is a formula of the form

    ∃ Q1 ... ∃ Qk S( E, Q1, ..., Qk)

    where E represents the edge relation, Q1, ..., Qk are existentially quantified predicates of arbitrary arity, and S(E, Q1, ..., Qk) is a first-order formula that involves only E, Q1, ..., Qk. As an example, 3-colorability of the graph can be expressed by an existential second-order formula

    ∃ Q1 ∃ Q2 ∃ Q3 S(E, Q1, Q2, Q3),

    where Q1, Q2, and Q3 are unary predicates (that represent the 3 colors), and S(E, Q1, Q2, Q3) is a first-order formula that says "Each point has exactly one color, and no two points with the same color are connected by an edge''.

    In the case of graphs, my theorem says that if T is a class of graphs that is closed under isomorphism (that is, whenever T contains a graph G, then it contains every graph isomorphic to G), then T is in NP if and only if T is defined by an existential second-order formula. In the case of graphs, it is an open problem as to whether we really need to allow existentially quantified predicates of arbitrary arity. On the one hand, it is conceivable that there are NP properties of graphs that require existentially quantified predicates of arbitrarily large arity. On the other hand, it is conceivable that we can capture every NP property of graphs by allowing only a single existentially quantified binary predicate.

    If we consider NP properties not just of graphs, but of arbitrary structures (such as structures with, say, two ternary relations and five 7-ary relations), then the characterization of NP in my theorem continues to hold, but in this case, it is known that existentially quantified binary predicates do not suffice. In particular, Ajtai proved (in the same amazingly rich 1983 paper [Σ11-formulae on finite structures, Annals of Pure and Applied Logic 24, 1983, pp. 1-48] where, among other things, he proved the Furst-Saxe-Sipser theorem independently of Furst, Saxe and Sipser), that if we consider structures consisting of a single m-ary relation, then the statement "The number of m-tuples in the relation is even" cannot be captured in existential second-order logic with any number of existentially quantified predicates of arity less than m.

    A gentle introduction to the work I've mentioned in this note and to some other work in finite model theory appears in my survey paper Finite model theory-a personal perspective.

    Sunday, October 09, 2005

    A New Packard Fellow

    Piotr Indyk, himself a 2003 Packard Fellow, writes
    On the recent blog topic of awards: you might be interested to know that Venkat Guruswami just received a Packard Fellowship.
    Congrats to Venkat for recognition (and money) well deserved.

    Saturday, October 08, 2005

    NP-Completeness Papers

    A colleague is refereeing a paper in a non-computer science area that shows a certain computational problem is NP-complete. The proof uses a simple reduction from a standard NP-complete problem. Should such a result be published? As long as people really care about the computational problem, then yes, such results should be published.

    The greatest gift of computational complexity to society is the ability to use NP-completeness to show that a large variety of problems are likely hard. The field has also developed many tools to help easily show such problems are NP-complete. We shouldn't penalize people for using these tools.

    This is one of those cases where having a "simple proof" hurts the paper. If the paper used PCP technology in the proof it would be published without question. And if the paper relied on the unique games conjecture (and thus proved a weaker result) it would likely get accepted into STOC.

    Thursday, October 06, 2005

    Unix Free Since 1999

    My first computer was a TRS-80, my second an Apple IIe. In college I mostly programmed in IBM 370 assembly code. But in graduate school (first at Berkeley and then at MIT) I started using Unix in its various forms and its programs, first vi and troff, then Emacs and LaTeX, and reading email via the command-line "mail."

    My future wife had one of the early "IBM compatible" PCs and I liked some of the programs one could use, like Quicken, Prodigy (a dial-up information service), good spreadsheets and word processing. Ever since, my home computer has been a DOS/Windows machine.

    Windows had good calendar and email programs long before they were available for Unix, so at one point I got a PC card for the Sun in my office which ran Microsoft Windows in a Unix window. As I found myself spending more and more time in that window, my next machine was a Windows machine with an X Windows program so I could connect to the department's Unix machines to use Emacs and LaTeX.

    Soon very good Emacs and LaTeX programs became available for Windows, and when I moved to NEC in 1999 I went Unix free and haven't looked back. My biggest complaint about Unix was the user interface. Printing pages 3 to 5 of a LaTeX document is easy in Windows; in Unix I had to run "man dvips" since I could never keep straight which flags did what. Once I spent hours trying to figure out what I did wrong in a Makefile (I had used spaces instead of tabs). I'll never forget the time I accidentally typed "rm temp *" instead of "rm temp*".

    Ever since, people have kept telling me that Linux interfaces and programs have gotten much better, and they have, but never enough to get me to switch back. Some Apple lovers have tried to get me to move to Apple, but Apples just never had the software available that PCs do. Windows emulators for Apples are popular but you don't see the need for the other direction.

    As more and more of the programs I use become web based, the actual platform becomes less and less important. Still though as someone who likes an easy user interface and wide availability of programs and doesn't do much programming and scripting, Windows has worked well for me.

    Wednesday, October 05, 2005


    A conversation with a graduate student today.
    • Student: Why doesn't FOCS have a best paper award this year?
    • Me: They often don't announce the winners until the conference begins.
    • Student: Traditionally the best papers get longer talks, and there are none scheduled.
    • Me: There's no such thing as tradition at STOC and FOCS. Rest assured (though I have no prior knowledge) there will be a best paper award given.
    I've heard that by tradition FOCS has had single sessions and STOC has had parallel sessions. Not long ago both had parallel sessions, and before that FOCS had the parallel sessions (the first time with a two-volume proceedings) and STOC had single sessions, and before that both had single sessions. STOC/FOCS used to run Monday-Wednesday; now they run Saturday-Tuesday, and sometimes not even that. STOC/FOCS were always in North America, and then they weren't. FOCS once had the only best student paper award, and then the only best student paper award named after somebody. Now they both do. We've had invited speakers. We've not had invited speakers. We've had tutorials. We've not had tutorials. One day one of the conferences will have electronic proceedings and the other won't, and that will be tradition.

    How STOC and FOCS are run is up to the program committee, with constraints from the schedule and the local arrangements site. Two or three years of the same thing in a row becomes a "tradition" and hard to change. But we shouldn't let these "traditions" prevent us from running the best possible conferences, nor should you count on them to continue as is. Traditions keep us in the past; change pushes us to the future.

    Tuesday, October 04, 2005

    DNA Testing for Sports

    In the second biggest sports story in Chicago (after the White Sox rout of Boston), the Chicago Bulls basketball team traded Eddy Curry to the New York Knicks. The Bulls had wanted Curry to take a DNA test to check for a certain heart condition; Curry refused, so finally they traded him to the Knicks, who will not require the test. I understand the Bulls' position, but testing DNA for diseases as a condition of employment opens up a big can of worms. Shades of Gattaca?

    On a completely different note, one-time guest blogger Scott Aaronson has joined the blogosphere himself. What took him so long?

    Monday, October 03, 2005

    Euthanizing a Virtual Pet

    We had a high-pitched tone coming from somewhere in our family room, but we couldn't find the source. I shut down the power in case the sound was coming from the stereo system or the lights, but the tone remained. Finally we found the culprit: my daughter's Tamagotchi, buried under some books.

    Pressing the buttons failed to quiet the device, so I attempted to open the unit to remove the battery. When that failed, I finally put the Tamagotchi in a plastic bag and hit it several times with a hammer until it shut up.

    Later I confessed the destruction of the virtual pet to my daughter, who seemed not to care in the least. The fad had ended. But next on my daughter's wish list: Nintendogs.

    Sunday, October 02, 2005


    The Nobel Prizes will be announced this week, and I predict that no one will win the Nobel Prize in computer science for the 105th consecutive year. Computer science's highest honor, the Turing Award, will be announced in early 2006 (February 16 last year).

    We also have some awards coming up for theorists. At every third STOC/FOCS conference, the Knuth Prize is given for outstanding sustained contributions to theoretical computer science. We will find out the next winner at the upcoming FOCS Conference in a couple of weeks.

    Every four years the International Math Union awards the Nevanlinna Prize for contributions in "mathematical aspects of information sciences" to a scientist under forty. The previous winners have all been computer science theorists and we have several excellent candidates for the 2006 prize as well.

    ACM SIGACT sponsors or co-sponsors several other awards, such as the Gödel Prize, given to the best recent journal paper (where the definition of "recent" keeps changing), and the Paris Kanellakis Theory and Practice Award, given to theorists whose work has had practical applications.