Thursday, December 29, 2016
Paper of the year goes to A Discrete and Bounded Envy-Free Cake Cutting Protocol for Any Number of Agents by Haris Aziz and Simon Mackenzie. It might not seem like a traditional complexity result, but cake cutting is a computational process with a desired set of properties, and this paper settles a long-standing open question. Bill posted about the paper back in June.
Some other nice papers from 2016: Hadamard not so rigid after all, and thus not usable for Valiant's lower-bound program; 2-source extractors that get near-perfect randomness from two weak low-entropy sources; query complexity separations among deterministic, randomized and quantum, in two wonderful constructions beautifully presented at STOC; space-efficient subset sum; and learning algorithms from natural proofs.
2016 will go down as a watershed year for machine learning: with AlphaGo, huge advances in translation, self-driving cars, and tools like TensorFlow out in the open, we'll truly see learning everywhere. Prediction didn't have such a great year, with bad calls on Trump's nomination, Brexit and Trump's election. Machines learning humans still has a way to go.
We thank our guest posters Molly Fortnow, MohammadTaghi HajiAghayi, Samir Khuller and Avi Wigderson.
We remember Erich Bloch, Hartmut Ehrig, Carrie Fisher, Rūsiņš Freivalds, John Glenn, David Johnson, Marvin Minsky, Peter Naur, Seymour Papert, Hilary Putnam, Lloyd Shapley and Boris Trakhtenbrot.
As the surprises of 2016 lead us into an uncertain 2017 let me leave with my usual advice: When in a complex world, best to keep it simple.
Monday, December 26, 2016
Hidden Figures
Yesterday, in our tradition of seeing math movies on Christmas, we saw Hidden Figures, the story of the African-American women who worked as "computers" at NASA in 1961 Virginia, in the last vestiges of "separate but equal". The movie focuses on three women, Katherine Johnson, Dorothy Vaughan and Mary Jackson, as they dealt with and crossed racial and gender boundaries. This isn't just a movie you should see; it's a movie you will enjoy watching, and I highly recommend it when it goes into wide release in the US on January 6. Some minor spoilers ahead.
Beyond the struggle, Hidden Figures works as a math movie. The major storyline follows the group of mathematicians who compute the trajectories of the Mercury flights, with Katherine Johnson playing a main role in figuring out how to pull John Glenn out of an elliptical orbit into a parabolic safe re-entry back to Earth.
The movie also serves up lessons about computers. The new IBM mainframe arrives and threatens the need for human computers, the women (here segregated into two groups) who did the tedious manual calculations that NASA relied on. In the movie, Dorothy Vaughan recognizes the threat and retrains her team in Fortran, perhaps a parable for how technology changes jobs today.
The majority of the audience in the sold-out theater were African-American women, and they laughed and cheered as the heroines of the movie succeeded. While we have made much progress in the last 50 years, we still have far to go.
Thursday, December 22, 2016
Moneyball for Academics
MIT and Tel Aviv business school professors Dimitris Bertsimas, Erik Brynjolfsson, Shachar Reichman and John Silberholz wrote an intriguing paper, Tenure Analytics: Models for Predicting Research Impact, about using metrics in tenure decisions: Moneyball for academics. The paper is behind a paywall, but here is a preprint (with the Moneyball title), an essay by some of the authors, and an article from Inside Higher Ed.
Brynjolfsson works extensively on metric-based decision making and gave early warnings on the loss of jobs to ML. I still have the HiPPO (Highest Paid Person's Opinion) from his EC 2010 invited talk. I like it for the complexity class on its back.
In this article, Brynjolfsson and his co-authors use properties of the co-author and citation network graphs to predict future paper-citation-based metrics, such as the h-index, for professors at top-ranked operations research departments. The paper claims that their metrics do a better job of prediction than tenure committees.
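As a quick illustration of the kind of citation-based metric being predicted (my own sketch, not code from the paper), here is how the h-index is computed from a list of per-paper citation counts:

```python
# Hypothetical helper, not from the paper: compute an author's h-index
# from the citation counts of their papers.
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3
```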
The IHE article captures an alternative view:
Many professors oppose the use of bibliometrics in hiring, tenure and promotion decisions, saying that scholarly potential can't be captured in a formula most often applied to pursuits with a bottom line, like winning games or playing the stock market. Such a system inevitably will be "gamed" by academics, critics say, and time-consuming but ground-breaking research will be sidelined in favor of "sure things" in terms of publishing -- the academic equivalent of clickbait.
I have a different issue: the use of bibliometrics to judge the current value of a professor. In baseball, the original Moneyball, there are measurable goals: runs, wins, championships. So you can use data analytics to measure both the current and future potential of a player. In academics, what makes a great professor? I hope the sole answer isn't the h-index. And without a subjective method to measure a successful professor, how do you train a model to predict for it?
I'm not against using metrics to help with tenure and hiring decisions but I still put most of the weight on the letters.
The article does talk about predicting INFORMS Fellows from the network-centrality model, predicting a subjective decision from objective statistics, though it doesn't compare that to tenure decisions. I wonder how well one could predict ACM Fellows. Another challenge: since CS is a constantly changing field, can an algorithm that predicts today's fellows from 20-year-old data also predict future fellows from today's data?
Monday, December 19, 2016
The very first Ramseyian Theorem
Many years ago I noticed that several books on Ramsey Theory mention that Hilbert proved the first Ramseyian theorem. The theorem is the Hilbert Cube Lemma (HCL), which in modern language is:
DEFINITION: Let x, y1, y2, ..., yn be natural numbers. Then the n-cube on x, y1, y2, ..., yn is
{ x + e1*y1 + ... + en*yn : e1, ..., en ∈ {0,1} }
HCL: For all c and n there exists H = H(n,c) such that for every c-coloring of {1, ..., H} there is a monochromatic n-cube.
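To make the definition concrete, here is a small brute-force sketch (mine, not from any of the sources below) that searches a given c-coloring of {1,...,H} for a monochromatic n-cube; coloring {1,...,32} by parity, for example, it finds the 2-cube on x = 1, y1 = y2 = 2, namely {1, 3, 5}.

```python
# A minimal brute-force illustration of the definitions above (not from the post):
# search a coloring of {1,...,H} for a monochromatic n-cube.
from itertools import combinations_with_replacement, product

def cube(x, ys):
    """The n-cube on x, y1, ..., yn: all numbers x + (sum of a subset of the ys)."""
    return {x + sum(choice) for choice in product(*[(0, y) for y in ys])}

def find_mono_cube(coloring, n):
    """coloring: dict from {1,...,H} to colors. Return (x, ys) of a monochromatic
    n-cube contained in {1,...,H}, or None if there is none."""
    H = max(coloring)
    for x in range(1, H + 1):
        for ys in combinations_with_replacement(range(1, H + 1), n):
            pts = cube(x, ys)
            if max(pts) <= H and len({coloring[p] for p in pts}) == 1:
                return x, ys
    return None

# 2-color {1,...,32} by parity; HCL guarantees a monochromatic cube once H is
# large enough, and here one is easy to find.
coloring = {i: i % 2 for i in range(1, 33)}
print(find_mono_cube(coloring, 2))  # (1, (2, 2)), i.e., the cube {1, 3, 5}
```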
Here are some quotes about this theorem from texts on Ramsey Theory:
Graham-Rothschild-Spencer's book Ramsey Theory: In the middle of Szemerédi's proof of a cube lemma with double-exponential bounds on H (Hilbert's proof gives tower-type bounds) they write:
Historical Note: In 1892 D. Hilbert proved that, for any k ≥ 1, if N (the naturals) is finitely colored then there exist in one color infinitely many translates of a k-cube.
THAT'S IT! No mention of WHY Hilbert proved this. According to the index, this is the only mention of Hilbert in the book.
From Landman and Robertson's Ramsey Theory over the Integers:
The results that are generally accepted to be the earliest Ramsey-type theorems are due, in chronological order, to Hilbert, Schur, and van der Waerden.
Later, in the exercises, they ask the reader to prove HCL from van der Waerden's theorem.
THAT'S IT! No mention of WHY Hilbert proved this. According to the index, this is the only mention of Hilbert in the book.
Alexander Soifer's The Mathematical Coloring Book. This is a very scholarly work about the history of coloring theorems. (For my review see here.) I expected to get MORE on why Hilbert proved it. Soifer does devote two pages to Hilbert, but as for WHY Hilbert did it, all he says is:
As far as we know today, the first Ramseyian-type results appeared in 1892 as a little noticed assertion in [Hil]. Its author was the great David Hilbert. In this work Hilbert proved the theorem of our interest merely as a tool for his study of irreducibility of rational functions.
I wanted to know what Hilbert proved (this was easy: I looked up the Hilbert Irreducibility Theorem) and how he used his Cube Lemma to do it. I assumed this would be known and out there in English, since the Hilbert Irreducibility Theorem is very important.
But NO: I found the original German version, but THAT'S IT.
Well, if you want something done, best to do it yourself. Except that I don't know German.
YADDA YADDA YADDA
Here is a paper with Mark Villarino and Ken Regan, in English, that has the proof.
In some later post I'll describe how the paper came about.
For now I will state the Hilbert Irreducibility Theorem, two-variable case:
If f(x,t) ∈ Z[x,t] is irreducible, then for infinitely many natural numbers t0, the specialization f(x,t0) is irreducible in Z[x].
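For a concrete feel of the statement, here is a sketch of my own (assuming SymPy is available; it is not from the paper): take f(x,t) = x^2 - t, which is irreducible in Z[x,t]; specializing t shows that f(x,t0) stays irreducible over Q exactly when t0 is not a perfect square, hence for infinitely many t0.

```python
# Illustrative sketch (not from the paper): specialize t in f(x,t) = x^2 - t
# and check which specializations remain irreducible over Q, using SymPy.
from sympy import symbols, factor_list

x, t = symbols('x t')
f = x**2 - t  # irreducible as a polynomial in Z[x, t]

irreducible_t0 = []
for t0 in range(1, 30):
    _, factors = factor_list(f.subs(t, t0), x)  # factor f(x, t0) over Q
    if len(factors) == 1 and factors[0][1] == 1:  # one factor, multiplicity 1
        irreducible_t0.append(t0)

print(irreducible_t0)  # the non-squares: 2, 3, 5, 6, 7, 8, 10, 11, ...
```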
Friday, December 16, 2016
Freedom of Speech
Bill and I have always strongly believed in the principle of freedom of speech, guaranteed to us by the constitution of the United States. The government blocks speech only in extreme circumstances, such as fraud, libel, and threats to people and property, but allows people to express their opinions, no matter how abhorrent we find them.
Speech used to be tempered by complexity. You could only talk to a small number of people at once. You had to physically print and distribute materials spouting your points of view. So while people could say what they wanted, they faced barriers to distributing it. We believed in free speech, but speech wasn't free.
Now, with the Internet and particularly social media, speech flows quickly and cheaply. People excitedly found they could express their views easily, and then discovered that others could do so as well.
We should welcome this diversity of opinion, never available at this level before. We should listen to what people have to say, challenge their views and even their claimed facts, but more importantly challenge ourselves and our own viewpoints with the arguments of others. Never trust anything you see on the Internet but never be afraid of it either.
Instead we see calls to protect people from ideas that fall significantly outside the mainstream and to decide for them which facts are true or false, whether by blocking accounts or by making it more difficult to distribute information. Free speech may have helped elect a president whose views and actions they find abhorrent, but that's not a reason to restrict speech. One must fight words with words, not block the words of others.
We must always fight for the rights of people to express themselves and avoid the path of limiting the spread of ideas. Once we start restricting the distribution of ideas we take a walk down a tough path, and someday you may find you'll have to keep your own thoughts to yourself.
Monday, December 12, 2016
Guest post by Samir Khuller on Humans, Machines, and the Future of Work (a workshop)
On Dec 5th and 6th I attended, at Rice University, a workshop on Humans, Machines, and the Future of Work. I had a wonderful lunch with John Markoff of the NY Times and Moshe Vardi of Rice University (Moshe is the brains behind this workshop).
I have rarely attended such meetings; they are very different from the usual conferences I go to, where the talks are all rather technical and half the audience is lost halfway through the talk.
The main theme poses the speculative question of how the evolving nature of work and technology, especially recent advances in AI and robotics (self-driving cars, robots to look after the elderly, self-checkout machines that can simply recognize your shopping order), can render huge categories of jobs redundant. Why hire a human, when a robot will cook dinner and clean your home every day? What will people spend their time doing?
Will the work week shrink greatly? Yet, for many, devices and 24/7 connectivity mean we are working more than ever! The speakers had varied backgrounds, from economics to social science to robotics.
In the end, it's clear that the nature of work is constantly evolving. Our children could have job titles our parents could never have dreamed of when they were selecting a profession. However, the rapidly growing interest in tech and computing is clear: these are powerful societal forces at work. New job titles: robot fixer, robot programmer, virtual reality expert, lawyer for the protection of robots. Demand for graduates with those skills will continue to increase, and the universities that invest quickly and early will become the dominant players. CMU, Georgia Tech and others invested very early in colleges of computing, and this has paid off handsomely for them. The demand for CS degrees is going to increase even more in the next decade, with research moving closer to the impact that technology is having on society. We, as researchers, need to pay attention to societal impact, since it's very real and immediate.
The workshop speakers were pretty amazing: the talks were accessible and extremely well delivered. I suppose such talks are meant for a really broad audience (there actually were not many CS people in the audience), and thus the points being made are fairly high level. I would recommend that we all become better acquainted with some of the social issues that interact with computer science, and going to workshops like this is a great start.
Thursday, December 08, 2016
Fixing the Academic Job Market
Last month I posted about the craziness of the computer science academic job market, due mainly to the decentralized nature of our field. Here are some ideas for what we could do better. I've stolen some of them from other fields such as math and economics.
Single Job Market System
A single website that every CS department uses to advertise positions and accept applications. It would be easy for applicants to apply to multiple places and, if they wish, make their materials open for any department to see. Recommenders would need to upload their letters only once. We could add some additional controls, such as ways for applicants to indicate geographical preferences or two-body issues.
We could have an opt-in site for candidates to list where and when they have interviews. It would make coordination of job interview dates and offer timing that much simpler.
Annual Meeting
A meeting in early January that brings together all the subfields of computing so we can work as one community instead of twenty. As part of this meeting, members of recruiting committees and job candidates come and schedule short interviews with each other to make preliminary choices for on-campus interviews. CS hasn't had annual meetings since the 80's but math and econ still do.
Virtual Meetings
Every time I bring up annual meetings, people complain that we already have too many conferences in computer science. So, if there is no physical meeting, we could set aside days for a virtual one, with recruiting committees and candidates talking over Skype.
Common Dates
Have some common fixed dates, just a couple of times in the spring, when departments can present offers, and when candidates must make a decision. That should reduce how long departments have to hold a position before it settles.
These last two ideas require no centralization, just willing job candidates.
Job Market Paper and Video
As the recent CRA Best Practices Memo suggests, candidates should choose a single paper to highlight their research. Each candidate should also post a short (3-5 minute) video where they describe this research at a level that any computer scientist could follow. The job talk should cover this paper only, instead of trying to wow with multiple works.
Candidate Web Page
If you are publicly looking for a job, set up a web page, linked from your home page, with your job materials: CV, research statement, teaching statement, list of references, and pointers to all your papers with links to PDFs, with the aforementioned job market paper and video highlighted. Also give a pointer to your Google Scholar profile page and make sure that page is correct and up to date.
Tuesday, December 06, 2016
A student's unusual proof might be a better proof
I asked a student to show that between any two rationals there is a rational.
She did the following: if x < y are rational, take a rational δ with 0 < δ < y - x and use x + δ.
This is correct, though more complicated than what I had in mind: (x+y)/2.
I then asked her to prove that between any two irrationals there is an irrational.
She did the following: if x < y are irrational, take a rational δ with 0 < δ < y - x and use x + δ.
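Spelled out in full (my phrasing of her δ-argument, not hers verbatim):

```latex
\textbf{Rationals.} If $x < y$ are rational, take the rational $\delta = (y-x)/2$.
Then $x < x + \delta < y$ and $x + \delta$ is rational.

\textbf{Irrationals.} If $x < y$ are irrational, take a rational $\delta$ with
$0 < \delta < y - x$; for instance $\delta = 1/n$ for any integer $n > 1/(y-x)$,
which exists by the Archimedean property. Then $x < x + \delta < y$, and
$x + \delta$ is irrational, since an irrational plus a rational is irrational.
```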
Which proof for the rationals is better? The (x+y)/2 proof is simpler, but the δ-proof also works for the irrationals.
Which proof do you prefer? Why? What are your criteria?
Friday, December 02, 2016
Guest post by Samir Khuller about visiting the Simons Institute for the Theory of Computing
A few days back I had the good fortune to spend four days at the Simons Institute at UC Berkeley, for a workshop titled Learning, Algorithm Design, and Beyond Worst-Case Analysis. Not only was the workshop extremely well run, the entire Institute is amazing: from wide-open spaces for collaboration to an excellent staff and amenities for visitors (especially long-term visitors, who have shared office space). We were missing a Dagstuhl-like venue in the US for a long time, and I think the Simons Institute partially makes up for this deficiency. Dagstuhl is of course a bit isolated, so people have nowhere to go but to interact and work, and that is part of the charm. Berkeley isn't isolated and has some of the most amazing cafes and restaurants right next to its charming campus. I ended up staying on campus at their Faculty Club.
Simons workshops are oversubscribed and in high demand, but even on Thursday evening (my last day there) I noticed that scores of badges had not yet been picked up. Part of the problem lies in the fact that registration is free. Perhaps charging $10/day for registration would ensure that people sign up only for the days they intend to be there, so others who want to attend are not locked out (to be fair, I don't know how many were locked out, but I have the impression that, due to high demand, workshops have closed registration in the past).
Kudos to both the Simons Institute organizers and the organizers of the special semester and workshop! I did visit Simons once before, in 2014, for just 15 minutes, when we were visiting places to inform the design of our very own Brendan Iribe Center for Computer Science and Innovation. Its construction launched in summer 2016, and we hope to be done by August 2018. Come and visit after that! This project has consumed the last 30 months of my life.
One of the main benefits of attending these workshops is seeing subfields as they form, long before new courses emerge covering these topics, and it's inspiring to meet the leaders in our field running the program on Algorithms and Uncertainty. Finally, if you missed it, videos of the talks are available online here.