Thursday, December 29, 2016

Paper of the year goes to A Discrete and Bounded Envy-Free Cake Cutting Protocol for Any Number of Agents by Haris Aziz and Simon Mackenzie. It might not seem like a traditional complexity result, but cake cutting is a computational process with a desired set of properties, and this paper settles a long-standing open question. Bill posted about the paper back in June.
Some other nice papers from 2016: Hadamard matrices not so rigid after all and thus not usable for Valiant's lower-bound program; 2-source extractors that get near-perfect randomness from two weak sources; query complexity separations between deterministic, randomized and quantum, in two wonderful constructions beautifully presented at STOC; space-efficient subset-sum; and learning algorithms from natural proofs.
2016 will go down as a watershed year for machine learning: AlphaGo, huge advances in translation, self-driving cars, and tools like TensorFlow out in the open mean we'll truly see learning everywhere. Prediction didn't have such a great year, with bad calls on Trump's nomination, Brexit and Trump's election. Machines learning humans still has a way to go.
We thank our guest posters Molly Fortnow, MohammadTaghi HajiAghayi, Samir Khuller and Avi Wigderson.
We remember Erich Bloch, Hartmut Ehrig, Carrie Fisher, Rūsiņš Freivalds, John Glenn, David Johnson, Marvin Minsky, Peter Naur, Seymour Papert, Hilary Putnam, Lloyd Shapley and Boris Trakhtenbrot.
As the surprises of 2016 lead us into an uncertain 2017, let me leave you with my usual advice: When in a complex world, best to keep it simple.
Monday, December 26, 2016
Hidden Figures
Yesterday, in our tradition of seeing math movies on Christmas, we saw Hidden Figures, the story of African-American women who worked as "computers" at NASA in 1961 Virginia, in the last vestiges of "separate but equal". The movie focuses on three women, Katherine Johnson, Dorothy Vaughan and Mary Jackson, as they dealt with and crossed racial and gender boundaries. But this isn't just a movie you should see; it's a movie you will enjoy watching, and I highly recommend it when it goes into wide release in the US on January 6. Some minor spoilers ahead.
Beyond the struggle, Hidden Figures does work as a math movie. The major storyline follows the group of mathematicians who compute the trajectories of the Mercury flights, with Katherine Johnson playing a main role in figuring out how to bring John Glenn out of an elliptical orbit into a safe parabolic re-entry back to Earth.
The movie also serves up lessons about computers. The new IBM mainframe arrives and threatens the need for human computers, the women (here segregated into two groups) who did the tedious manual calculations that NASA relied on. In the movie Dorothy Vaughan recognizes the threat and retrains her team to learn Fortran, perhaps a parable for how technology changes jobs today.
The majority of the audience in the sold-out theater were African-American women, and they laughed and cheered as the heroines of the movie succeeded. While we have made much progress in the last 50 years, we still have far to go.
Thursday, December 22, 2016
Moneyball for Academics
MIT and Tel Aviv business school professors Dimitris Bertsimas, Erik Brynjolfsson, Shachar Reichman and John Silberholz wrote an intriguing paper, Tenure Analytics: Models for Predicting Research Impact, about using metrics in tenure decisions: Moneyball for academics. The paper is behind a paywall, but here is a preprint (with the Moneyball title), an essay by some of the authors, and an article from Inside Higher Ed.
Brynjolfsson works extensively on metric-based decision making and gave early warnings on the loss of jobs to ML. I still have the HiPPO (Highest Paid Person's Opinion) from his EC 2010 invited talk. I like it for the complexity class on its back.
In this article, Brynjolfsson and his co-authors use properties of the co-author and citation network graphs to predict future citation-based metrics, like the h-index, for professors at top-ranked OR departments. The paper claims that their metrics do a better prediction job than tenure committees.
The IHE article captures an alternative view:
Many professors oppose the use of bibliometrics in hiring, tenure and promotion decisions, saying that scholarly potential can't be captured in a formula most often applied to pursuits with a bottom line, like winning games or playing the stock market. Such a system inevitably will be "gamed" by academics, critics say, and time-consuming but ground-breaking research will be sidelined in favor of "sure things" in terms of publishing -- the academic equivalent of clickbait.

I have a different issue: the use of bibliometrics to judge the current value of a professor. In baseball, the original Moneyball, there are measurable goals: runs, wins, championships. So you can use data analytics to measure both the current and future potential of a player. In academics, what makes a great professor? I hope the sole answer isn't the h-index. And without a subjective method to measure a successful professor, how do you train a model to predict for it?
I'm not against using metrics to help with tenure and hiring decisions but I still put most of the weight on the letters.
The article does talk about predicting INFORMS Fellows from the network centrality model, predicting a subjective decision from objective statistics, though it doesn't compare that to tenure decisions. I wonder how well one can predict ACM Fellows as well. Another challenge here: as CS is a constantly changing field, can one use an algorithm that predicts today's fellows from 20-year-old data to predict future fellows from today's data?
Monday, December 19, 2016
The very first Ramseyian Theorem
Many years ago I noticed that several books on Ramsey Theory mention that Hilbert proved the first Ramseyian theorem. The theorem is the Hilbert Cube Lemma (HCL), which in modern language is:
DEFINITION: Let x, y1, y2, ..., yn be natural numbers. The n-cube on x, y1, y2, ..., yn is

{ x + e1*y1 + ... + en*yn : e1, ..., en ∈ {0,1} }

HCL: For all n and c there exists H = H(n,c) such that for all c-colorings of {1, ..., H} there exists a monochromatic n-cube.
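To make the definition concrete, a tiny worked instance (my own example, not from any of the books quoted below):

```latex
% The 2-cube on x = 1, y_1 = 2, y_2 = 5:
\[
  \{\, 1 + e_1 \cdot 2 + e_2 \cdot 5 \;:\; e_1, e_2 \in \{0,1\} \,\}
  \;=\; \{\, 1,\ 3,\ 6,\ 8 \,\}.
\]
% HCL says: for every c there is an H = H(2,c) such that any c-coloring
% of {1, ..., H} contains some 2-cube of this form, entirely one color.
```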
Here are some quotes about this theorem from texts on Ramsey Theory:
Graham-Rothschild-Spencer's book Ramsey Theory: In the middle of Szemerédi's proof of a cube lemma with double-exponential bounds on H (Hilbert's proof gives tower-type bounds) they write:
Historical Note: In 1892 D. Hilbert proved that, for any k ≥ 1, if N (the naturals) is finitely colored then there exists in one color infinitely many translates of a k-cube.
THAT'S IT! No mention of WHY Hilbert proved this. According to the index, this is the only mention of Hilbert in the book.
From Landman and Robertson Ramsey Theory over the Integers:
The results that are generally accepted to be the earliest Ramsey-type theorems are due, in chronological order, to Hilbert, Schur, and van der Waerden.
Later, in the exercises, they ask the reader to prove HCL from van der Waerden's theorem.
THAT'S IT! No mention of WHY Hilbert proved this. According to the index, this is the only mention of Hilbert in the book.
Alexander Soifer's The Mathematical Coloring Book. This is a very scholarly work about the history of coloring theorems. (For my review see here.) I expected to get MORE on why Hilbert proved it. Soifer does devote two pages to Hilbert, but as for WHY Hilbert did it, all he says is:
As far as we know today, the first Ramseyian-type results appeared in 1892 as a little noticed assertion in [Hil]. Its author was the great David Hilbert. In this work Hilbert proved the theorem of our interest merely as a tool for his study of irreducibility of rational functions.
I wanted to know what Hilbert proved (this was easy: I looked up the Hilbert Irreducibility Theorem) and how he used his Cube Lemma to do it. I assumed this would be known and out there in English, since the Hilbert Irreducibility Theorem is very important.

But NO: I found the original German version, but THAT'S IT.
Well, if you want something done, best to do it yourself. Except that I don't know German.
YADDA YADDA YADDA
Here is a paper with Mark Villarino and Ken Regan, in English, that has the proof.
In some later post I'll describe how the paper came about.
For now I will state the Hilbert Irreducibility Theorem, two-variable case:

If f(x,t) is in Z[x,t] and is irreducible, then for infinitely many natural numbers t0, f(x,t0) (a polynomial in x alone) is irreducible.
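To get a feel for the statement, a standard illustrative instance (my own example, not from our paper):

```latex
% Take f(x,t) = x^2 - t, which is irreducible in Z[x,t].
% Specializing t to t_0: x^2 - t_0 is irreducible over Q exactly when
% t_0 is not a perfect square, which holds for infinitely many t_0.
\[
  f(x,t) = x^2 - t
  \;\Longrightarrow\;
  f(x,t_0) = x^2 - t_0 \ \text{is irreducible for infinitely many } t_0 \in \mathbb{N}.
\]
```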
Friday, December 16, 2016
Freedom of Speech
Bill and I have always strongly believed in the principle of freedom of speech, guaranteed to us by the constitution of the United States. The government blocks speech only in extreme circumstances (fraud, libel, threats to people and property) but allows people to voice their opinions, no matter how abhorrent we find them.
Speech used to be tempered by complexity. You could only talk to a small number of people at once. You had to physically print and distribute materials spouting your points of view. So while people could say what they wanted, they had some barriers to distribute that information. We believed in free speech but speech wasn't free.
Now with the Internet, particularly through social media, speech flows quickly and cheaply. People excitedly found they could express their views easily. They also discovered that others could do so as well.
We should welcome this diversity of opinion, never available at this level before. We should listen to what people have to say, challenge their views and even their claimed facts, but more importantly challenge ourselves and our own viewpoints with the arguments of others. Never trust anything you see on the Internet but never be afraid of it either.
Instead we see calls to protect people from ideas that fall significantly outside the mainstream and to decide for them which facts are true or false, whether by blocking accounts or making it more difficult to distribute information. Free speech may have helped elect a president whose views and actions many find abhorrent, but that's not a reason to restrict speech. One must fight words with words, not block the words of others.
We must always fight for the rights of people to express themselves and avoid the path of limiting the spread of ideas. Once we start restricting the distribution of ideas we take a walk down a tough path, and someday you may find you'll have to keep your own thoughts to yourself.
Monday, December 12, 2016
Guest post by Samir Khuller on Humans, Machines, and the Future of Work (a workshop)
On Dec 5th and 6th I attended, at Rice University, a workshop on Humans, Machines, and the Future of Work. I had a wonderful lunch with John Markoff of the NY Times and Moshe Vardi of Rice University (Moshe is the brains behind this workshop).

I have rarely attended such meetings, which are so different from the usual conferences I attend, where the talks are all rather technical and half the audience is lost halfway through the talk.

The main theme poses the speculative question of how the nature of work is evolving and how technology, especially recent advances in AI and robotics (self-driving cars, robots to look after the elderly, self-checkout machines that can simply recognize your shopping order), can render huge categories of jobs redundant. Why hire a human, when a robot will cook dinner and clean your home every day? What will people spend their time doing?
Will the work week shrink greatly? Yet, for many, devices and 24/7 connectivity mean we are working more than ever! The speakers had varied backgrounds, from economics to social science to robotics.
In the end, it's clear that the nature of work is constantly evolving. Our children could have job titles our parents could never have dreamed of when they were selecting a profession. However, the rapidly growing interest in tech and computing is clear: these are powerful societal forces at work. New job titles: robot fixer, robot programmer, virtual reality expert, lawyer for the protection of robots. Demand for graduates with those skills will continue to increase, and the universities that invest quickly and early will become the dominant players. CMU, Georgia Tech and others invested very early in colleges of computing, and this has paid off handsomely for them. The demand for CS degrees is going to increase even more in the next decade, with research moving closer to the impact that technology is having on society. We, as researchers, need to pay attention to societal impact, since it's very real and immediate.
The workshop speakers were pretty amazing: talks were accessible and extremely well delivered. I suppose such talks are meant for a really broad audience (there actually were not many CS people in the audience) and thus the points being made are pretty high level. I would recommend that we all become better acquainted with some of the social issues that interact with computer science, and going to workshops like this is a great start.
Thursday, December 08, 2016
Fixing the Academic Job Market
Last month I posted about the craziness of the computer science academic job market due mainly to the decentralized nature of our field. Here are some ideas of what we can do better. I've stolen some of these ideas from other fields such as math and economics.
Single Job Market System
A single website that every CS department uses to advertise positions and accept applications. Easy for applicants to apply to multiple places and, if they wish, make their materials open for any department to see. Recommenders need only upload their letter once. We could add some additional controls, such as methods for applicants to indicate, say, geographical preferences or two-body issues.
We could have an opt-in site for candidates to list where and when they have interviews. It would make coordination of job interview dates and offer timing that much simpler.
Annual Meeting
A meeting in early January that brings together all the subfields of computing so we can work as one community instead of twenty. As part of this meeting, members of recruiting committees and job candidates come and schedule short interviews with each other to make preliminary choices for on-campus interviews. CS hasn't had annual meetings since the 80's but math and econ still do.
Virtual Meetings
Every time I bring up annual meetings, people complain that we already have too many conferences in computer science. So if no physical meeting, we can set aside days to have a virtual meeting with recruiting committees and candidates talking over Skype.
Common Dates
Have some common fixed dates, just a couple of times in the spring, when departments can present offers, and when candidates must make a decision. That should reduce how long departments have to hold a position before it settles.
These last two ideas require no centralization, just willing job candidates.
Job Market Paper and Video
As the recent CRA Best Practices Memo suggests, candidates should choose a single paper to highlight their research. Each candidate should also post a short (3-5 minute) video where they describe this research at a level that any computer scientist could follow. The job talk should cover this paper only, instead of trying to wow with multiple works.
Candidate Web Page
If you are publicly looking for a job, set up a web page, linked from your home page, to your job materials: CV, research statement, teaching statement, list of references, pointers to all your papers with links to PDFs, with the aforementioned job market paper and video highlighted. Also give a pointer to your Google Scholar profile page and make sure that page is correct and up to date.
Tuesday, December 06, 2016
A student's unusual proof might be a better proof
I asked a student to show that between any two rationals there is a rational.

She did the following: if x < y are rational, then take a rational δ with 0 < δ < y − x and use x + δ.

This is correct, though more complicated than what I had in mind: (x+y)/2.

I then asked her to prove that between any two irrationals there is an irrational.

She did the following: if x < y are irrational, then take a rational δ with 0 < δ < y − x and use x + δ (an irrational plus a rational is irrational).
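For the record, here are both arguments written out in one place (my own write-up of the student's idea):

```latex
% Midpoint proof (rationals only): for rationals x < y,
%   x < (x+y)/2 < y, and (x+y)/2 is rational.
% Delta proof (both cases): choose a rational \delta with
%   0 < \delta < y - x (e.g. \delta = 1/n for any integer n > 1/(y-x)).
% Then
\[
  x \;<\; x + \delta \;<\; y,
\]
% and x + \delta is rational when x is rational, and irrational when x is
% irrational, since an irrational plus a rational is irrational.
```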
Which proof for rationals is better? The (x+y)/2 proof is simpler, but the delta proof also works for irrationals.

Which proof do you prefer? Why? What are your criteria?
Friday, December 02, 2016
Guest post by Samir Khuller about visiting the Simons Institute for the Theory of Computing
Visiting the Simons Institute for the Theory of Computing

A few days back I had the good fortune to spend four days at the Simons Institute at UC Berkeley (for a workshop titled Learning, Algorithm Design, and Beyond Worst Case Analysis). Not only was the workshop extremely well run, the entire Institute is amazing, from wide open spaces for collaboration to an excellent staff and amenities for visitors (especially long-term visitors, who have shared office space). We were missing a Dagstuhl-like venue in the US for a long time, and I think the Simons Institute partially makes up for this deficiency. Dagstuhl is of course a bit isolated, so people have nowhere to go but interact, and work, and that is part of the charm. Berkeley isn't isolated and has some of the most amazing cafes and restaurants right next to its charming campus. I ended up staying on campus at their Faculty Club.
Simons workshops are over-subscribed and in high demand, but even on Thursday evening (my last day there) I noticed that scores of badges had not yet been picked up. Part of the problem lies in the fact that registration is free. Perhaps charging $10/day for registration would ensure that people sign up for the days that they intend to be there, so others who want to attend are not locked out (to be fair, I don't know how many were locked out, but I have the impression that, due to high demand, workshops have closed registration in the past).
Kudos both to the Simons Institute organizers and to the organizers of the special semester and workshop! I did visit Simons once before, in 2014, for just 15 minutes, when we were visiting places to see for the design of our very own Brendan Iribe Center for Computer Science and Innovation, whose construction launched in summer 2016; we hope to be done by August 2018. Come and visit after that! This project has consumed the last 30 months of my life.
One of the main benefits of attending these workshops is to see sub-fields as they form, long before new courses emerge covering these topics, and it's inspiring to meet the leaders in our field running the program on Algorithms and Uncertainty. Finally, if you missed it, videos of the talks are available online here.
Wednesday, November 30, 2016
A Few Interesting Computer Science Papers
Probabilistic Rank and Matrix Rigidity by Josh Alman and Ryan Williams
Leslie Valiant outlined an approach to proving circuit lower bounds from matrix rigidity. A matrix is rigid if you need to change many entries to significantly reduce its rank. Valiant showed that one could create a function with high algebraic circuit complexity from a rigid matrix. Of course one needs an explicit rigid matrix, and the Hadamard matrices (whose rows are Hadamard codewords) seemed the perfect candidate. Alman and Williams say not so fast: in fact the Hadamard matrices are not that rigid and won't work for Valiant's program.
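For readers who haven't seen rigidity before, here is the standard definition in symbols (my own summary, not quoted from the paper):

```latex
% The rigidity of a matrix M at target rank r: the minimum number of
% entries one must change to bring the rank down to at most r.
\[
  R_M(r) \;=\; \min\bigl\{\, \#\{(i,j) : M_{ij} \neq M'_{ij}\} \;:\; \operatorname{rank}(M') \le r \,\bigr\}.
\]
% Valiant's program wants an explicit n x n matrix M with
% R_M(\varepsilon n) \ge n^{1+\delta} for constants \varepsilon, \delta > 0;
% Alman and Williams show the Hadamard matrices fall short of this.
```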
Ryoan: A Distributed Sandbox for Untrusted Computation on Secret Data by Tyler Hunt, Zhiting Zhu, Yuanzhong Xu, Simon Peter, and Emmett Witchel
You want to use Intuit's TurboTax to prepare your taxes, but you don't want to actually let Intuit or their cloud provider see your income sources. Sounds like a job for homomorphic encryption, a good solution in theory but not yet practical. Ryoan, an OSDI best paper awardee, takes a systems approach to the same problem, creating a computing sandbox where one can do secure computation without allowing the cloud or software provider access.
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation (and related blog post) by Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes and Jeffrey Dean
Google is now using a single system to translate between different pairs of languages, even pairs for which it has no examples. Their analysis shows a potential internal representation of language and the beginnings of a universal translator.
Wednesday, November 23, 2016
Music Theory for Theorists
I fully completed and passed my first MOOC, Fundamentals of Music Theory, a Coursera course out of the University of Edinburgh. I played the tuba in college and grad school but really just practiced scales and (usually) hit the notes. I didn't really understand keys, intervals, chords and such, and I wanted to learn.
Music theory has the same problem as C++: too much overloaded notation before you really understand what's going on. So here's a quick view of music theory without the mess.
Think (theoretically) of a bi-infinite sequence of notes. Every note is equivalent to the notes twelve positions away from it (though in different octaves). Pick a note and label it 1; then the notes labelled 1, 3, 5, 6, 8, 10, 12, 1 (an octave higher) form a major scale in the key of note 1. There are twelve major scales, depending on which note you label 1. If you start at 10, i.e., 10, 12, 1, 3, 5, 6, 8, 10, you get the relative minor scale in the key of note 10. Also twelve minor scales.
Start a new major scale from 8 and you'll get 8, 10, 12, 1, 3, 5, 7, 8, just one number off from the previous scale. Repeating this process will run through all the major scales. This is called the circle of fifths (since 8 is the fifth note of the scale). You can go through the circle backwards by starting at 6.
A chord is typically three or four alternating notes of a scale, for example 1, 5, 8 (the tonic) and 8, 12, 3, 6 (the dominant 7th). They can be inverted using different octaves and all sorts of other good stuff.
So, notation-wise, there is a mapping from {A,B,C,D,E,F,G} × {flat, natural, sharp} to the twelve notes, with several notes getting two different names, in such a way that each note of any scale can get one of the letters in cyclic order. For example, the D-major scale would be the notes D-natural, E-natural, F-sharp, G-natural, A-natural, B-natural, C-sharp and back to D-natural. You put it all together in a diagram called a musical score.
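Since this is a computer science blog, here is a minimal Python sketch of the arithmetic above (the function names are my own; notes are the labels 1 through 12 from this post, not standard pitch names):

```python
# Notes are labels 1..12, with octaves identified (arithmetic is mod 12).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # the whole/half-step pattern of a major scale

def major_scale(root):
    """Return the eight labels of the major scale on `root`, wrapping within 1..12."""
    scale = [root]
    for step in MAJOR_STEPS:
        scale.append((scale[-1] + step - 1) % 12 + 1)
    return scale

def circle_of_fifths(start=1):
    """Restart the scale on its fifth note until the starting key comes back around."""
    roots, root = [], start
    while root not in roots:
        roots.append(root)
        root = major_scale(root)[4]  # the fifth note of the current scale
    return roots

print(major_scale(1))      # [1, 3, 5, 6, 8, 10, 12, 1]
print(circle_of_fifths())  # [1, 8, 3, 10, 5, 12, 7, 2, 9, 4, 11, 6]: all twelve keys
```

Starting at 6 instead walks the same cycle backwards, as noted above.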
Many undergrad computing courses now use Python instead of C++ to get around the notational challenges. Alas, music notation goes back a millennium or so; I don't expect any changes soon.
Monday, November 21, 2016
Should I tell HS students that if they do well I'll write them a letter OR do I want them to ...
I teach a 3-week intense course for HS students on cryptography during the summer. Some of the students are very good, interested, and working hard. I also give out some extra credit assignments that some of them do. For such students I am glad to write letters of recommendation for college. I want to reward them for working hard and being interested when they didn't know they would get a letter for it.
These students don't quite know about colleges and letters and that stuff. I have two choices:
1) The first day tell them that if they do well I could write a letter for them.
PRO- this will encourage some of them to work harder
CON- I want them to work hard and BE interested (not just SHOW interest, though I doubt they are that good as actors), NOT because of some reward but because they really are interested. Contrast:
Bob showed an intense interest in computer science
to
Bob showed an intense interest in computer science but it may have been just for show to get this letter. Gee, I can't tell.
2) Don't tell them. This way, when they work hard and show interest, it's more likely genuine.
PRO- as noted, their behavior is genuine
CON- They may not work as hard.
I tend to do (1) now: tell them. One thing: don't tell them too much; keep it simple. I used to say that I would write a letter even for a B student if there were extenuating circumstances, or if I could tell they really were good and just happened to blow an exam, or something like that. But this just invites more questions about what they need to get a letter, and I've never had one of those cases where a B student is better than they look. (This could be because my exams are of the type where there is no time pressure, as evidenced by half the students leaving about halfway through. They are not easy, but they depend on knowledge, not cleverness, so more time does not help. Or it could be something else about my teaching and grading.)
So- better to tell them about a letter option or not?
Thursday, November 17, 2016
Computer Science Academic Hiring
My faculty recruiting season in 2016 never stopped. The spring recruiting season bled through summer right into various fall recruiting activities. How I envy my friends in other fields whose faculty recruiting season runs in less than three months.
Economist Noah Smith writes about the process he went through in the econ job market in 2012.
After applications go out, employers contact grad students for interviews. These interviews take place at the AEA's Annual Meeting in early January, which is a gigantic confab where most of the economists in the country go to eat free food and hobnob. As an interviewee, you generally don't have time to go to any of the presentations or speeches; you're running back and forth from hotel to hotel, going to interview after interview. This year's AEA Meeting was in Chicago.
You then wait a week or two, and the employers who are interested in you call you back and schedule a flyout. Flyouts, which happen in February and March, involve going to the place, meeting the people, getting taken to dinner, and presenting your research. After that, offers go out.

Ahh, to be done with recruiting before April Fools' Day. Why can't we do this in computer science? In one word: decentralization.
Academic fields administer themselves the way their discipline thinks. So econ creates a structured market with the goal of an efficient outcome (in their sense of the right people getting the right jobs).
Computer science abhors centralized control. We lack a strong central authority and have very little coordination between different departments. So we have a process that takes a long time to work its way through and quite often fails to get that efficient outcome. Even the CRA's modest 2010 proposal for some common earlier deadlines went nowhere.
What can we do? I welcome suggestions and will give some of my own in a future post.
Sunday, November 13, 2016
Did you do research when you were an undergrad? Well.. it was a different time.
Dan is a high school student who has worked with me and is now an ugrad at College Park.
DAN: Bill, did you do research when you were in high school?
BILL: No, things were different then, there wasn't really a mechanism for it. Or, as you young people would say, it wasn't a thing.
DAN: When as an undergrad did you begin to do research?
BILL: Well, I never did research as an ugrad either.
DAN: Then how did you get into Harvard? Did they accidentally put your folder in the wrong pile?
BILL: Good question. I'll ask my adviser if he recalls what went wrong back in 1980.
DAN: Really? You'll ask him?
BILL: NO!
Okay, the real question here is:
At some point (before 1980) doing research as an ugrad in college was not really a thing many people did.
And now it's quite common.
What changed? Is the change for the good? I'll phrase my thoughts and questions in the form of a question (the opposite of Jeopardy).
1) Are there more magnet schools and more ways to funnel students into research early?
2) Do computers allow students to do more research than they could have back before (say) 1980?
3) Is the phrase `HS research' a bit odd: is it more learning than doing?
4) Is there a cycle going on- since more students are doing research, more have to do research to keep up in order to get into grad school? Is this even an issue for college admissions and/or scholarships?
5) Has the quality of research at HS science competitions increased? Ugrad research? Grad student research? Prof research?
6) Does having HS or ugrads do research with you increase or decrease your productivity? (That depends on the students of course)
7) Do colleges give faculty kudos for guiding HS or ugrad research? (That depends on the school of course)
8) Are there more grants for ugrad research than before 1980? Are REU programs fairly new?
9) Does an ugrad now NEED to have done research to get into one of the top X grad schools? If so, is this a good thing or not? Does this hurt those without the opportunity to do research? Do REU programs help alleviate this problem?
10) Are students entering grad school knowing far more than they used to?
11) Some magnet schools REQUIRE HS students to do research. Is this a good thing?
12) I've been using 1980 as a reference for no good reason- when did HS research and ugrad research increase? Was it gradual or was there some big increase over some X years where X is small?
Thursday, November 10, 2016
Our Role and Responsibility
It's Trump but not just Trump. Brexit. Right-wing politicians gaining power in France, Netherlands, Austria and beyond. We worry about the future of our country and the whole world at large.
But what role have we played? By we I mean those of us who work in technology-related fields and have benefited the most from the new economy.
There's a large population whose lives haven't gotten better. Not just the poor; we're talking the lower 80% of the economy. Our changing economy hasn't greatly improved the life of the middle class.
Our technologies have put people out of jobs. We tell them our technology will create more and better jobs, our own version of trickle-down economics. They just need to learn what we have created, drive our Ubers, at least until the Ubers can drive themselves.
We love working at universities where we can educate the very top students. But many don't get the opportunity to go to college at all. They are honest people willing to work hard and not seeing their lives improving. And we just call them "uneducated".
We have learned how to mine data but often forget the people behind the data and get caught by surprise when things don't go the way we expected from the data.
We've created new communication channels. But we also control those channels. We often make the choices between usability, security, privacy, fairness, free speech and preventing hatred and discrimination. But who are we to make those choices? They have little say.
Who are they? They are us, fellow Americans. We need to make sure that all of us benefit from our changing world, that everyone feels they contribute to making it better, and that everyone has a say. Democracy, messy as it is, means that if people don't feel the world looks out for them, they will find someone who promises change, no matter the baggage they carry. And we need to listen, or we end up with Donald Trump.
Monday, November 07, 2016
And the winner is... Harambe
Tomorrow is the election for President of the USA! This post is non-partisan but, in the interest of full disclosure, I state my politics: I will be voting for Hillary Clinton. I urge you to read the posts by Scott Aaronson here and by Terry Tao here, or John Oliver's segment here, to see why. I would have thought that 99% of their readers, and mine, are voting for Clinton, but looking at the comments on Scott's and Terry's blogs I realize that might not be true. (ADDED LATER: A nice article that summarizes the entire election; I put it here for you and for me: here.)
However, I am not blogging about who you should vote for.
I had the teachers of Discrete Math and Algorithms (two different courses) give their students paper ballots with the four candidates on the Maryland ballot (Clinton, Trump, Johnson, Stein) plus a write-in option, and had them vote. But with a difference:
Discrete Math did standard plurality voting, where you vote for ONE candidate.
Algorithms did approval voting, where you mark all those you approve of.
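The two systems differ only in what a ballot is, as a minimal Python sketch shows (hypothetical ballots for illustration, not the actual class data):

```python
from collections import Counter

# Plurality: each ballot names exactly one candidate.
plurality_ballots = ["Clinton", "Clinton", "Trump", "Harambe"]

# Approval: each ballot is the set of candidates the voter approves of.
approval_ballots = [{"Clinton", "Stein"}, {"Clinton", "Johnson", "Stein"}, {"Trump"}]

plurality_tally = Counter(plurality_ballots)
approval_tally = Counter(name for ballot in approval_ballots for name in ballot)

print(plurality_tally.most_common())  # counts sum to the number of ballots
print(approval_tally.most_common())   # counts can exceed the number of ballots
```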
DISCRETE MATH RESULTS: 428 students.

Clinton: 305
Trump: 44
Johnson: 21
Stein: 11
Abstain (or some variant like `f**k all of them'): 7
Sanders: 6
Harambe (the gorilla who was shot; see here): 3
Hugh Mungus (word play, you figure it out; also see here): 3
Ken Bone (he asked a good question at the Town Hall debate; then...): 2
Vermin Supreme (policy: all Americans must brush their teeth; see here): 2
Joe Yecco (don't know who this is; maybe a student in the class): 2

For the people who got 1 vote see here. There were 22 of these.
ALGORITHMS RESULTS: 251 students (totals won't add up since it's approval voting).

Clinton: 155
Johnson: 63
Stein: 59
Trump: 40
Sanders: 18
Abstain: 14
Clyde Kruskal (who teaches the course): 13
Tom Reinhart (a CS lecturer at UMCP): 5
Larry Herman (a CS lecturer at UMCP): 3
John Kasich: 3
Evan McMullin (third-party candidate who is doing well in Utah; see here): 3
Vermin Supreme (see above): 3
Harambe the gorilla (really!): 2
Michelle Obama: 2
Eric Sim (don't know who this is; maybe a student in the class): 2

For people who got 1 vote see here.

For more complete information see here.
What to make of all of this?
College students are largely liberal, so the pro-Clinton vote is not surprising. Also, Maryland is a strongly Democratic state, though I think being a college is more of a factor than being in Maryland. I suspect that students at UT Austin are also liberal even though the state isn't.
I've had large classes vote before, but not by approval voting. Both of the classes above had more write-ins than usual, though that is less surprising for approval voting.
Approval voting shows that Trump is very few people's second choice. That is, those that don't like him REALLY don't like him. Again, this is among the narrow demographic of college students taking Algorithms at UMCP CS.
Any odd candidate who gets ONE vote does not surprise me. If ONE person votes for Aliko Dangote, a real (see here) Nigerian billionaire (he has NOT emailed you), that is not surprising.
If someone who is sort-of out there gets TWO or more votes (Vermin Supreme, Hugh Mungus) I am not surprised. Those two have websites and a following, though small.
The three votes for Harambe the gorilla surprised me. I've looked on the web, and this is NOT a thing: there are no Harambe-for-Prez websites, even as a joke (I assume if they existed they would be a joke). So I am surprised he got 3 votes in Discrete Math and 2 votes in Algorithms. One of those voters approved of Clinton, Johnson, Stein, and Harambe. I'm assuming that's a strong anti-Trump vote.
Does this poll have any predictive value? The winner of the Discrete Math poll ended up being the real winner in 1992, 1996 (though there were more Ross Perot votes than I would have thought), 2008, and 2012. Note that in all these cases the Democratic candidate won. Not a surprise.
Some personal opinions:
1) If Trump loses, what will the Republicans learn from this? NOTHING, since Trump was such an odd candidate. I wish it had been Hillary (or someone like her) vs Cruz (or someone on the hard right), so that if they lost the Republicans couldn't say, as some have, that they lost because they weren't conservative enough. And one can hope that Hillary vs Cruz would have been more of a contest of ideas.
2) I'm not sure what would be better for the Republican Party: a Trump loss or a Trump win.
3) Predictwise says a Hillary win is around 85% likely. Nate (the only pollster known by his first name!) says 65%. If Hillary wins it won't be clear who was right. If Trump wins then Nate would have been more right, but I'm not sure what that means. More generally: if two people give different predictions, in the end you can see who was right; if two people give different probabilities of an event, then it's harder to see who was right.
Thursday, November 03, 2016
Do We Still Need Great Minds?
An anonymous commenter on a recent post on alternate histories wrote:
As far as science is concerned I don't believe that the great minds are needed anymore. They only speed things up that would later have been developed by an industry of a great mass of brilliant but more mediocre researchers. The best examples are Gödel's and Turing's work which would have been done necessarily by the next generation of logicians or theoretical computer scientists. Regarding your own contributions it is fair to say that "the future doesn't need you." Anything important would also be created sooner or later by others, only some idiosyncratic work of minor importance could be left undone forever.

Depressing. So we do either mediocre work or work that others would later do anyway.
Of course we can never know. We can't tell if some great idea today may not have existed if a single genius didn't create it. We also don't know what technology we don't have because of someone who became a playwright instead of a scientist.
I don't doubt we'd have the P v NP question without Cook, Karp and Levin, though our understanding of the problem would have a very different flavor.
Take someone like Manuel Blum. Through him and his students we got formal models of cryptography that led to zero-knowledge proofs, interactive proofs, probabilistically checkable proofs, lower bounds on approximation and major advances in coding theory, among so much more. Would we have all this work if Manuel never existed? Maybe, eventually, but we'd live in a whole different theory world without him. And that world would always look different until we find a new Manuel to unify it.
Monday, October 31, 2016
My chair wants me to post about... Mohammad Haj wants me to post about... I want to post about...
Maryland is looking to hire lecturers, and my chair Samir wants me to post it on my blog. I think he overestimates the power of the blog; however, here is the link. I could use this to launch into a post about either the value of lecturers or how to handle the fact that CS enrollment is so high, but I've already posted on those in the past: here and here. (ADDED LATER: Actually my chair wanted me to post about all of our hiring, which also includes professors, so see here.)
Mohammad Hajiaghayi is the program chair of the SPAA conference and wants me to post it on my blog. I think he overestimates the power of the blog; however, here is the link. I could use this to launch into a post about the prestige-conference model of academia, or other conference issues, or parallelism, but these topics have also already appeared on the blog, mostly by Lance.
But here is the real question: Is it easier or harder to get information out (e.g., UMCP CS is looking for a lecturer, the SPAA call for papers is out) than it used to be?
EASIER: We have so many different mechanisms: blogs, email, Facebook, Twitter. Conferences used to have posters as well- I don't think I've seen one for a while. (If someone knows where some of the old posters are online, please leave a comment- some of them were real neat!)
HARDER: We all get too much email (for those still on email), and too much input in general. Hence things can get lost. I'm on so many mailing lists for talks that I actually MISS talks I want to go to since I tend to ignore ALL of those emails.
If you WANT to find something out, is that easier? If you want to find out
when is the SPAA conference?
yes- you can Google it. For something like
which schools are advertising that they want to hire a lecturer?
probably yes, though I am not quite sure where to look.
There are things out there that, if you knew about them, you would want to know, but you are not quite sure how to look for them. Do we come across them more or less than we used to? I think more since, at least in my case, people I know email me stuff they think I will care about and they are usually right (Ramsey theory and cake cutting papers, and satires of Nobel Laureate Bob Dylan, find their way to my inbox).
Thursday, October 27, 2016
Get Ready
My daughters, now in college, never knew a time when they couldn't communicate with anyone instantaneously. Molly, now 18, takes pride in having the same birth year as Google. They have never in their memory seen a true technological change that so dramatically affects the world they live in. But they are about to.
The 50's and 60's saw a transportation revolution. The Interstate highway system made local and national car and truck travel feasible. The jet engine allowed short trips to faraway places. The shipping container made transporting a good, made anywhere in the world, cheaper than producing it.
We could have national and worldwide academic conferences. China became a superpower by shipping us low-cost goods. Dock worker jobs, the kind held by Archie Bunker, have morphed and shrunk, even as the number of imported goods has grown.
In the 90's we had a communications revolution. The cell phone kept us connected all the time. The home computer and the Internet gave us immediate access to the world's information and Google helped us sort through it.
It fundamentally changed how we interacted. No longer did we need to make plans in advance. Eventually we would have little need for encyclopedias, almanacs, maps, or physical media for music, photos and movies. Not to mention new types of companies and the transformation of how businesses could work with their employees and contractors spread across the world.
That brings us to today. We are at the brink, if it hasn't already started, of an intelligence revolution, the combination of big data, machine learning and automation. The initial obvious transformation will come from autonomous cars, which will change not only how we get from point A to point B but how we plan roads, businesses and where we live. Beyond that work itself will change as we will continue to automate an ever growing number of white collar jobs. As with every transformation, the world we live in will change in unforeseen ways, hopefully more good than bad.
I really look forward to watching these changes through my daughters' eyes, to see how this new society directly affects them as they start their working lives. And how their children will one day see an old-fashioned automobile with relics like foot pedals and a steering wheel and be shocked that their parents once drove cars themselves.
Monday, October 24, 2016
Exaggeration is one thing but this is....
This website is about the history of math and lists famous mathematicians. The ones from the 20th century are biased towards logic, but you should go there yourself and see who you think they left out.
Their entry on Paul Cohen is... odd. It's here. I quote from it:
-------------------------------------
His findings were as revolutionary as Gödel’s own. Since that time, mathematicians have built up two different mathematical worlds, one in which the continuum hypothesis applies and one in which it does not, and modern mathematical proofs must insert a statement declaring whether or not the result depends on the continuum hypothesis.
-------------------------------------
When was the last time you had to put into the premise of a theorem CH or NOT-CH?
I did once, here, in an expository work about a problem in combinatorics that was independent of set theory (ZFC) since it was equivalent to CH. Before you get too excited, it was a problem in infinite combinatorics having to do with coloring the reals.
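If you want a concrete example of this flavor (a classical one, not necessarily the problem in my write-up), there is a theorem of Erdős and Kakutani, stated here from memory, so check before quoting me:
-------------------------------------
CH holds if and only if the nonzero reals can be partitioned into countably many sets, each linearly independent over Q.
-------------------------------------
So a natural-sounding statement about coloring the reals can turn out to be exactly CH in disguise.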
I suspect that at least 99% of my readers have never had to insert a note in a paper about whether they were assuming CH or NOT-CH. If you have, I'd love to hear about it in the comments. And I suspect you are a set theorist.
Paul Cohen's work was very important--- here we have an open problem in math that will always be open (that's one interpretation). And there will sometimes be other problems that are independent of ZFC or equivalent to CH or something like that. But it does not affect the typical mathematician working on a typical problem.
I have a sense that it's bad to exaggerate like this. One reason would be that if the reader finds out the truth he or she will be disillusioned. But somehow that doesn't seem to apply here. So I leave it to the reader to comment: Is it bad to exaggerate Paul Cohen's (or anyone's) accomplishments? And if so, why?
Thursday, October 20, 2016
Alternate Histories
Imagine if we had a machine that let us change some earlier moment in history and watch what develops. We wouldn't actually change history--that would lead to paradoxes and other disasters--but we could see what would have developed. Suppose Archduke Ferdinand was never assassinated. How would that have changed 20th century history and beyond?
The same could be said for an academic field. Science flows like a story, with results building on other results, "standing on the shoulders of giants" so to speak. But not all theorems depend on earlier ones; suppose things had happened in a different order. How would that have changed our field?
Points to ponder:
Suppose Gauss were alive today instead of two centuries ago. Would he still be as famous? Would there be a big hole in mathematics that would have been left for the current Gauss to solve?
Suppose Fermat's last theorem were still a conjecture. Would there be more budding mathematicians inspired by the wonders of that famous open problem, like my generation was? Would the Clay Mathematics Institute still have produced its list of Millennium Problems, including P v NP?
Suppose we knew Primes in P back in 1975. Would randomized algorithms and subsequent derandomization techniques have happened without its prime example? Same for Undirected Connectivity in Log Space. Both of these are small cheats, as the AKS and Reingold proofs are at their core derandomization arguments and may not have happened if we didn't think about randomized algorithms.
What if someone settled P vs NP right after Cook? Would it have stopped most of the research in computational complexity theory? Would it depend on whether P = NP was answered positively or negatively?
What if you were never born? Ultimately that would be the only true measure of your influence in the world. What if your research somehow prevented other great theorems from happening? Would you even want to see the results of that experiment?
Tuesday, October 18, 2016
This university does not discriminate based on....
I recently came across the following (I deleted the name of the school) and added my own comments in caps as they relate to UMCP hiring of professors.
X-University, located in YZ, in hiring professors, does not discriminate on the basis of:
race
color. COULD AN HBCU DISCRIMINATE AGAINST WHITE PROFESSORS?
religion. HAS NEVER COME UP. COULD A RELIGIOUS SCHOOL DISCRIMINATE ON THIS BASIS?
creed. I LOOKED UP HOW IT DIFFERS FROM RELIGION- A CREED COULD BE ANY SET OF BELIEFS. WHAT IF ONE OF THE CANDIDATE'S BELIEFS IS REPREHENSIBLE BUT WOULD NOT INTERFERE WITH THEIR JOB? JUST ASKING.
age. WHAT IF YOU WANT SOMEONE THERE LONG-TERM?
gender. COULD A WOMEN'S (MEN'S) SCHOOL DISCRIMINATE AGAINST MEN (WOMEN)?
gender identity or expression. I THINK THIS IS A NEW ONE TO COVER TRANS AND SIMILAR THINGS. IT'S NOT SEXUAL ORIENTATION, WHICH IS BELOW.
national origin. HAS NEVER COME UP. EVEN SO, I WOULD NEVER HIRE A WISIAN. See here for why.
marital status. WE CAN'T EVEN ASK IF THEY HAVE A SPOUSE, WHICH MAY MAKE IT HARD TO TELL THEM SOME SELLING POINTS OF THE AREA- GOOD FOR MARRIED PEOPLE? GOOD FOR SINGLE PEOPLE? COULD A RELIGIOUS SCHOOL HOLD IT AGAINST A CANDIDATE THAT THEY WERE LIVING WITH SOMEONE THEY WERE NOT MARRIED TO?
ancestry. I CAN'T IMAGINE THIS COMING UP.
present or past history of mental disorder. I'M SURPRISED ABOUT THIS ONE SINCE I WOULD THINK HAVING A PRESENT MENTAL DISORDER IS RELEVANT TO THE JOB. THEN AGAIN, GÖDEL AND CANTOR PROBABLY HAD SOME MENTAL DISORDERS.
learning disability. HAS NEVER COME UP. I HAVE NEVER EVEN SEEN A GRAD STUDENT WITH A LEARNING DISABILITY, SO IT MIGHT NOT COME UP FOR A WHILE.
physical disability. HAS NEVER COME UP. I CAN SEE IT COMING UP.
political belief. HAS NEVER COME UP. I WONDER IF IT COMES UP MORE IN A POLI SCI OR HISTORY DEPT.
veteran status. HAS NEVER COME UP. COULD A MILITARY SCHOOL PREFER VETERANS?
sexual orientation. HAS NEVER COME UP. BUT IT COULD.
genetic information. THIS MAY BE RELEVANT IN THE FUTURE, PERHAPS THE NEAR FUTURE. SCARY?
non-position-related criminal record. WHICH CRIMES ARE POSITION-RELATED? THIS COULD BE A LEGAL QUAGMIRE. MURDERING SOMEONE IN A BAR FIGHT IS NOT POSITION-RELATED, BUT MURDERING A STUDENT WHO COMPLAINED ABOUT A GRADE IS. OTHER CASES MAY NOT BE AS CLEAR-CUT.
Thursday, October 13, 2016
2016 Fall Jobs Post
The weather cools down, the leaves change color and you start thinking about what you plan to do after you graduate. As a public service every year about this time, we offer up links and advice for the job search for PhDs in theoretical computer science.
For computer science faculty positions best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.
It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway since some departments may hire theorists when other opportunities to hire dry up. Think global--there are growing theory groups around the world.
I expect hiring this year to be similar to recent years. Most departments are looking to expand dramatically in computer science, but with a preference for the more applied areas that students, and the companies that will hire them, desire. You can make yourself more valuable by showing a willingness to participate and teach beyond core theory. Machine learning, data science and information security are areas of great need where theorists can play a large role.
A bad job talk can sink your job prospects. Know your audience, and particularly for faculty positions create a talk that conveys your results and their importance to those outside of theory. Pictures help; complex formulas and heavy text work against you. Practice your talk in front of non-theory students and faculty. You will greatly increase your chances if you can sell yourself as a valuable colleague, not just a smart theorist who will prove great things in isolation.
Good luck out there and I look forward to seeing your names on the Spring 2017 jobs post.
Tuesday, October 11, 2016
Ideal courses for a comp sci dept/I'm glad we don't do it
I once heard it said:
In our data structures course we read Knuth and ignore the proofs.
In our algorithms course we read Knuth and ignore the code.
And indeed, there are many topics where the theory part is in one course and the programming is in another.
With that in mind, here is a different way to offer courses (some depts already do some of this).
1) A course in Algorithms AND Data Structures. Actually I have seen this title on courses but it's usually just a theory course. I mean you REALLY PROVE and CODE (see the sketch at the end of this post). Might need to be a year long.
2) Crypto AND Security. You learn crypto and security together and do both. If you only did the crypto relevant to security it might fit in a semester, and in fact that might already be being done. But I am thinking of really combining these courses--- code and prove. Might be a year long.
3) Formal Lang Theory AND Practice of Compilers. You do DFAs, NDFAs, CFGs, but also do compiler design. If you also want to do P, NP, and decidability (spell check thinks that decidability is not a word!) it might not quite connect up with compilers; then again it might, with theorems like: CFG equivalence is undecidable. Might be a year long.
4) Machine learning AND Prob/stat.
PROS: Theory and Practice would be more united.
CONS: Having year-long courses is hard for the students scheduling their courses. Would having just one semester of any of the above courses make sense?
CONS: Harder to find someone to teach these courses. I'll confess that I prefer to teach formal lang theory without any programming in it.
CAVEAT: I think theorists coming out now know more of the practical counterpart of their area than I did and perhaps than people of my generation did.
CAVEAT: A much less radical thing to do is to put more security into crypto, more about compilers into formal lang theory, etc. But that's a bit odd now since we DO have a course in security and a course in compilers. Even so, it seems to be a good idea, and I know many schools are doing this.
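To make item 1 concrete, here is a minimal sketch in Python of the kind of unified assignment I have in mind (my own toy example, not from any actual course): the student must both CODE the algorithm and PROVE the invariant stated in the comments.
-------------------------------------
def binary_search(a, x):
    """Return an index i with a[i] == x, or -1 if x is not in a."""
    # Invariant (the PROVE half, by induction on loop iterations):
    #   a is sorted, and if x occurs in a then its index lies in [lo, hi).
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1   # x, if present, is strictly right of mid
        elif a[mid] > x:
            hi = mid       # x, if present, is strictly left of mid
        else:
            return mid
        # Termination, also part of the proof: hi - lo strictly decreases.
    return -1

# The CODE half: tests.  The proof gives correctness in O(log n) comparisons.
assert binary_search([1, 3, 5, 8, 13], 8) == 3
assert binary_search([1, 3, 5, 8, 13], 4) == -1
-------------------------------------
The point of the pairing: the same invariant that makes the proof go through is exactly what keeps the code from having an off-by-one bug.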
Friday, October 07, 2016
Typecasting from Avi's 60th Celebration
Lance: Live from the Institute for Advanced Study in Princeton, New Jersey, Bill and I are doing a typecast from the Avi Wigderson 60th Birthday Celebration. Hi Bill.
Bill: Hi Lance. Last time I saw you was for Mike Sipser's 60th birthday, and next will be for Eric Allender and Michael Saks.
L: New Jersey again. The Garden State.
B: New Jersey rocks! After all Bruce Springsteen is from here.
L: So was I.
B: You both rock! You are better at math. But I don't want to hear you sing. I got in last night. How was the first day of the conference?
L: There are two kinds of talks at a celebration like this. Some people survey how Avi has shaped the field and their research in particular. Scott Aaronson gave a great talk titled "Avi's Permanent Impact on Me" on how a survey talk on the permanent problem eventually led to Scott's work on boson sampling.
Most people though are giving talks on their latest and greatest research. At least they have some good Avi jokes to start their talks.
B: So what category did Oded Goldreich's "Canonical depth-three Boolean circuits for multi-linear functions, Multi-linear circuits with general gates, and matrix rigidity" talk fit in?
L: Oded did say the title was a joke and he did start with an Avi story. Actually it was a Silvio and Shafi story about when to leave for the airport.
B: I liked Alex Lubotzky's ruminations on pure math versus computer science. He went into pure math thinking computer scientists had to know how to use computers. Had he known that they didn't, he might have gone into computer science. He likes that his work on expanders has "applications" to computer science.
L: How did you like the rest of his talk?
B: I don’t know--it’s still going on.
L: Once I gave a talk at a logic meeting about generic oracles. People came up to me afterwards so excited that logic had such applications in computer science.
B: I liked Noga Alon's talk "Avi, Graphs and Communication". There is a fixed graph G on k nodes with k players, each with an n bit string. How many bits do they need to communicate over the edges to determine if all the strings are identical? Basically, if the graph is 2-connected you need half of the trivial kn bits.
[Intermission]
L: A day has passed and we find ourselves talking again on Friday at IAS. It’s a beautiful day and we are sitting on lawn chairs on the IAS grounds. This is how Einstein must have felt.
B: Although he wouldn't be skipping a talk on quantum physics. We are listening in through the magic of the Internet.
I liked Noam Nisan's talk on the complexity of pricing. Perhaps surprisingly, a seller can do better bundling two independent items than using individual prices. I plan to use that example in my fair division class. I may even do a short blog post.
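[Aside: a standard example of this phenomenon, possibly the one Noam used: two items with values independent and uniform on {1,2}. Pricing an item at 1 always sells it for revenue 1; pricing it at 2 sells it half the time for expected revenue 1; so separate prices earn at most 2 in expectation. But the bundle priced at 3 sells unless both values are 1, i.e. with probability 3/4, for expected revenue 9/4 > 2.]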
L: I can't wait. Madhu gave a great talk on low-degree polynomials and how error-correcting codes have gone from obscurity in theoretical computer science in the early 90's to a standard tool just a few years later. And we have Madhu (inspired by Avi) to thank for that.
B: Eyal Wigderson, son of you know who, is a grad student studying neuroscience. He talked on "Brains are Better Computers than Computers," which I found flattering.
L: He was talking about rats, not your brain Bill.
B: Should I feel complimented or insulted?
L: Yes.
B: It's kind of nice to have a birthday person's child talk at a conference like this. In TCS there are several: Paul Valiant talked at Les Valiant's 60th, Tal Rabin talked at Michael Rabin's 80th, father and son Edmonds are both here, and Molly Fortnow will surely talk at Lance Fortnow's 60th. I remember when she interviewed us.
L: Let’s leave it at that.
B: Silvio Micali "knighted" Avi and used Byzantine agreement to improve bitcoin-like chains. I was enlightened to learn bitcoin mining is so expensive that only five companies do it regularly.
L: When people claim to have solved P = NP I ask them to mine a few bitcoins and get back to me. I have yet to see any bitcoins.
B: I ask them to find the Ramsey number R(5,5). I'm still waiting.
L: On that note time to say goodbye until we meet again.
B: Allender/Saks- New Jersey again! Take us out, Lance.
L: In a complex world, best to keep it simple, and
B/L: HAPPY BIRTHDAY AVI!
Tuesday, October 04, 2016
A Second Order Statement true in (R,+) but not (Q,+)
In my last post I asked:
Is there a first order statement true in (R,+) but false in (Q,+)?
Is there a second order statement true in (R,+) but false in (Q,+)?
First order: No.
One reader said it followed from the Lowenheim-Skolem Theorem. I can believe this but don't see how.
One reader said the theory of (Q,+) is complete. True, but I would like a reference. (Here is one route: (Q,+) and (R,+) are both nontrivial torsion-free divisible abelian groups, and that theory is complete--- it has no finite models and is categorical in every uncountable cardinality, so the Los-Vaught test applies--- hence the two structures are elementarily equivalent.)
I would prove it by a Duplicator-Spoiler argument.
Second order: Yes.
Some readers thought I had < in my language. I do not, so I think that examples using < do not work- unless there is some way to express < in my language.
Some readers thought that second order meant you could quantify over functions. My intent was just to be able to quantify over sets.
Some had the right answer. The one I had in mind was:
There exist A and B such that A and B are subgroups, each with at least two elements, that intersect only at 0.
Here is a writeup.
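For the curious, here is one way to write out that sentence and why it works (my rendering; the details may differ from the writeup). In the language of (R,+) alone, "z = 0" is expressible as z + z = z, so the sentence can be written as:
-------------------------------------
\exists A \, \exists B \, [\, \mathrm{Sub}(A) \wedge \mathrm{Sub}(B) \wedge \exists x\,(A(x) \wedge x+x \neq x) \wedge \exists y\,(B(y) \wedge y+y \neq y) \wedge \forall z\,(A(z) \wedge B(z) \rightarrow z+z=z) \,]
-------------------------------------
where \mathrm{Sub}(A) abbreviates the first-order formula saying A is closed under addition and under inverses (for inverses: \forall x\,(A(x) \rightarrow \exists y\,(A(y) \wedge (x+y)+(x+y) = x+y))).
This is true in (R,+): take A = Q and B = Q\sqrt{2}, the rational multiples of \sqrt{2}. It is false in (Q,+): if a = p/q in A and b = r/s in B are nonzero, then the integer multiples (qr)a and (ps)b both equal pr, which is nonzero, and integer multiples stay inside a subgroup, so A and B share a nonzero element.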