Monday, December 30, 2013
The complexity result of the year goes to The Matching Polytope has Exponential Extension Complexity by Thomas Rothvoss. Last year's paper of the year showed that the Traveling Salesman Problem cannot have a subexponential-size linear program formulation. If one could show that every problem in P has a polynomial-size LP formulation, then we would have a separation of P and NP. Rothvoss' paper shoots down that approach by giving an exponential lower bound for the polynomial-time computable matching problem. This story is reminiscent of the exponential monotone circuit lower bounds, first for clique and then for matching, in the 1980's.
If you expand to all of mathematics, one cannot ignore Yitang Zhang's work showing that the liminf of the difference between consecutive primes is finite, i.e., infinitely many pairs of consecutive primes lie within some fixed bounded gap. Dick and Ken have other great results for the year.
A big year for theoretical computer science. Silvio Micali and Shafi Goldwasser received the ACM Turing Award. The P v NP problem makes a prominent appearance on a major US television series. We are seeing the rise of currencies based on complexity. Large-scale algorithms, cryptography and privacy play center stage in the Snowden revelations on the National Security Agency.
Generally good news on the funding and jobs front in the US. After a year of sequestration and a government shutdown, looks like some stability for science funding now that congress has actually passed a budget. Plenty of theorists got academic jobs last spring and given the number of ads, this year's CS job market should be quite robust as well.
A year for books. Of course my own The Golden Ticket, as well as Scott Aaronson's Quantum Computing since Democritus and Tom Cormen's Algorithms Unlocked.
An odd year for the blog in 2013 without a single obituary post. Nevertheless let us remember 4-Colorer Kenneth Appel, Georgia Tech Software Engineering Professor Mary Jean Harrold, and Wenqi Huang, who led the Institute of Theoretical Computer Science at the Huazhong University of Science and Technology. In 2013 we also said goodbye to AltaVista, Intrade, Google Reader and the CS GRE.
In 2014 we'll have the next installment of My Favorite Ten Complexity Theorems of the Past Decade and the centenaries of George Dantzig and Martin Gardner. Enjoy New Year's and keep reading.
Thursday, December 26, 2013
To Write, or Not to Write, That Is the Question
Guest post by Vijay Vazirani
Our field has been blessed with some great books: classics such as Knuth's volumes and Garey & Johnson literally got the field going, and the books of Lovász brought much clarity to difficult, important areas. The new millennium brought a plethora of new books by famous TCS researchers including Arora, Barak, Goldreich, Kleinberg, Nisan and Tardos. With an astonishing 33 short titles in theory, Now Publishers has virtually created an assembly line for producing books.
Even so, when faced with the prospect of writing a new book, I am filled with trepidation and self-doubt: Am I up to the effort needed? Do I have a genuinely new point of view to expound? If so, have I figured out the "right" format and style for expounding it in the simplest and clearest manner?
The issue arose when I decided to co-teach, with my able postdoc Ruta Mehta, the course Equilibrium Computation in Spring. With her characteristic enthusiasm, Ruta has practically forced me into a long-term commitment of writing a book on this topic. However, knowing what it takes, so far I have managed to resist an unequivocal "yes".
Most people would say it is a losing proposition -- the gains are typically minuscule compared to the toils. So why do so many people write books? I can imagine several reasons, but best to leave this discussion to my esteemed colleagues who are probably yearning for something to ponder on in this holiday season ...
That brings me to another question: "Are some of the books written in this millennium eventually going to be regarded as classics in the style of Knuth's or Lovász's books?" One could look at citation counts or average citations per year to get an "objective" reading of the situation. However, when one is talking about true class, such crude estimators are clearly not the way to go.
Your thoughts are eagerly sought ...
Monday, December 23, 2013
Our Journal is 99 44/100 percent pure!!
Sometimes we are asked to evaluate formally how good a journal or conference is (Excellent, Very Good, Good, Fair, Better-than-being-poked-by-a-stick, pass the stick). Sometimes we talk about these things informally (ICALP is the European STOC! SODA is as hard to get into as FOCS! CCC is a top-tier topics conference!).
But is there a way to judge these things formally? And should there be? Be aware of Goodhart's Law:
When a measure becomes a target, it ceases to be a good measure.
One way to measure how good a journal or conference is is the impact factor, which is based on citation counts. Does this work well? Some issues:
- Even with an honest effort these things are hard to do well. People may cite the conference version, the journal version, the arXiv version, or just give a website.
- Even with an honest effort, just a few articles can skew the results (see the sketch after this list).
- It's hard to compare across fields. I suspect that pure math journals have lower citation rates than biology journals.
- If an author cites himself, should that count?
- The above points were all about HONEST efforts. Could a journal do things to boost its impact factor and then brag about how high its impact factor is? Would it? Alas yes, see this article and this article.
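To see how much a single paper can move the needle, here is a toy calculation of the standard two-year impact factor (citations this year to articles from the previous two years, divided by the number of those articles). The journals and numbers are made up purely for illustration.

```python
# Toy two-year impact factor: average citations this year per article
# published in the previous two years. Both "journals" are hypothetical.
def impact_factor(citations_per_article):
    return sum(citations_per_article) / len(citations_per_article)

typical_journal = [2, 0, 1, 3, 0, 2, 1, 1, 0, 2]     # modest, even citation counts
skewed_journal  = [0, 0, 1, 0, 0, 350, 0, 1, 0, 0]   # one blockbuster survey paper

print(f"typical journal: {impact_factor(typical_journal):.1f}")   # 1.2
print(f"skewed journal:  {impact_factor(skewed_journal):.1f}")    # 35.2, driven by one paper
```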
I COULD rant that we should all be well-rounded enough to read outside of our field, but I know how hard that is.
I COULD rant about the dishonesty pointed out in the above links, but that's only part of the problem.
I COULD say "What do you think?" so I will.
Thursday, December 19, 2013
Security Changes
A couple of policy changes recently, one that supposedly enhances privacy and another that could reduce it.
Google has been implementing perfect forward secrecy since 2011 and other major Internet players, such as Facebook and Twitter, have started using perfect forward secrecy in the wake of the Snowden revelations that the NSA has been collecting Internet traffic to these companies.
So what is perfect forward secrecy? Not an easy question to find the answer to on the Internet. The Wikipedia article says little. So I asked a couple of our security folks in the department.
The rough idea: We want to communicate several rounds of messages, but if the current keys are compromised they can't be used to decrypt earlier messages. A couple of immediate thoughts: This isn't "perfect"; you can still discover the earlier messages by breaking the encryption (say if P = NP). Also, this isn't that exciting a problem from a theoretical perspective: you can just use a standard public-key protocol, start with fresh private and public keys each round, and delete the old ones. But that isn't very efficient.
One approach to PFS: Have a standard public/private key scheme to set up a session key (used in an AES or similar private-key protocol), then run a separate Diffie-Hellman exchange for each message. In RSA, if you have the factors of N you can decrypt, whereas in Diffie-Hellman you can keep the same group without compromising security.
Chris Peikert calls this a poor-man's perfect forward security and there are better schemes though a bit more complicated.
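To make the fresh-keys idea concrete, here is a minimal sketch of an ephemeral Diffie-Hellman exchange. The parameters are toy values chosen for readability, the exchange is unauthenticated, and this is just the general pattern rather than the specific scheme Peikert has in mind.

```python
# Sketch of ephemeral Diffie-Hellman, the idea behind forward secrecy:
# each session gets a fresh key pair, so compromising today's long-term
# keys reveals nothing about yesterday's session keys.
# WARNING: toy parameters, unauthenticated, for illustration only.
import hashlib
import secrets

P = 2**127 - 1   # a prime (far too small for real use)
G = 3            # base element; fine for a toy example

def ephemeral_keypair():
    x = secrets.randbelow(P - 2) + 1        # fresh secret exponent
    return x, pow(G, x, P)                  # (private, public)

# One round of communication: both sides create NEW ephemeral keys.
alice_priv, alice_pub = ephemeral_keypair()
bob_priv, bob_pub = ephemeral_keypair()

# Each side derives the same shared secret from the other's public value.
shared_alice = pow(bob_pub, alice_priv, P)
shared_bob = pow(alice_pub, bob_priv, P)
assert shared_alice == shared_bob

# Hash the shared secret down to a symmetric session key (e.g. for AES).
session_key = hashlib.sha256(shared_alice.to_bytes(16, "big")).digest()

# Forward secrecy comes from throwing the ephemeral secrets away:
# once deleted, even a later compromise cannot reconstruct session_key.
del alice_priv, bob_priv, shared_alice, shared_bob
print("derived a", len(session_key) * 8, "bit session key")
```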
On a different front, Google recently announced that images would be displayed by default in Gmail messages. The images would not come directly from the sender's servers, which could serve malware that avoids Google's filters, but rather from Google's servers, after being downloaded and cleansed by Google.
Downloading an image often tells the sender that the message was read, typically via some id encoded in the filename. So once again we give up privacy for convenience. At least Google gives us the option to turn off the automatic display.
Monday, December 16, 2013
Analogs between Quantum Computing and Parallelism
(Jon Katz wanted me to mention this: A wise man once noted that there are fewer quantum algorithms than there are quantum-algorithms textbooks! But there is still a lot of interest in quantum computation from academia, government, industry, and the broader public. Univ of MD and NIST have recently formed a center devoted to quantum computing, involving faculty and researchers from both the physics and computer science communities. As part of this they are advertising a postdoctoral fellowship.)
A long time ago people in theory did a lot of work on parallel computing before there were many parallel computers built. Today people are doing a lot of work on quantum computing before quantum computers are built. What is similar and different here?
- When parallel computers were actually built they were not like PRAMs. Many of the models assumed shared memory, which real machines didn't have. Even so, did the work on PRAMs and other models help the practitioners? Directly? Indirectly?
- Are the models of quantum computing studied now helping the practitioners? Is the development of quantum computers at too early a stage to even ask this question?
- Even if no quantum computers are ever built, the study has been a success, since some classical problems have been solved using quantum techniques (and I think this will happen more and more). And some interesting math has come out of it. And physicists and others have learned more about quantum mechanics from quantum computing. Could the same thing have been said for parallelism -- even if parallel computers had not been built, would the study of them still have been useful?
- One big difference: many problems can be parallelized in some form and solved that way (and some cannot be solved any other way). A wise man named Jon Katz referred above to a wise man named Ronald de Wolf who wrote, in a review of 3 books on quantum computing:
A quick survey on amazon.com shows that the number of books on quantum computing (at least 20) is more than 10 times as high as the number of quantum algorithms (2: Shor's and Grover's). (Footnote: Note that this review was written in 2003 so this statement is no longer true.)
While I think he meant there are more quantum algorithms (quantum random walks, quantum simulations, quantum selection-type problems?, quantum number-theory-type problems?) now than in 2003, I will note that there are also more books on quantum computing now than then -- on a cursory look at Amazon I think I counted 30, but with the uncertainty principle, it's hard to tell. The point is, aside from factoring and of course quantum simulation, I wonder whether quantum computers, if they are built, will be able to do much more. BUT SEE NEXT PARAGRAPH. (Every year my students are surprised to find out that quantum computers probably CANNOT solve SAT in poly time.)
ADDED LATER: Comment 6 has a pointer to a survey of MANY quantum algorithms for MANY algebraic problems, and also a pointer to a more recent article on quantum algorithms. I will be delighted if the number of quantum algorithms now exceeds the number of books on quantum computing.
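To make at least one of the algorithms de Wolf counted concrete, here is a tiny classical simulation of Grover's search over 8 items -- my own illustrative sketch, not anything from the post -- showing the roughly sqrt(N) iterations behind the quadratic speedup.

```python
# Classical simulation of Grover search on 3 qubits (8 items).
# Pure numpy; nothing here requires quantum hardware.
import numpy as np

n = 3                      # qubits
N = 2 ** n                 # size of the search space
marked = 5                 # the item the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))                     # uniform superposition
oracle = np.eye(N); oracle[marked, marked] = -1        # flip the sign of the marked item
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)     # inversion about the mean

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):    # about sqrt(N) iterations
    state = diffusion @ (oracle @ state)

probs = state ** 2
print("most likely item:", int(np.argmax(probs)),
      "with probability", round(float(probs[marked]), 3))
```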
Thursday, December 12, 2013
Approximate Computing
Hadi Esmaeilzadeh is the newest professor in the School of Computer Science at Georgia Tech. Hadi works in computer architecture and did some great work on dark silicon. (Just noticed this is starting to look like a Lipton-style blog post.)
I had Hadi give a lecture in my theory class (the dreaded day-before-Thanksgiving lecture). Hadi talked about his new research directions in approximate computing. Approximate computing is a new paradigm in the architecture community; it doesn't even have its own Wikipedia entry yet.
When you do arithmetic operations, say addition and multiplication on real numbers, you typically want full precision up to the limits of the bits we use to store those numbers. Suppose you allow some error, say 5%. For logical operations this would be a disaster, giving a very wrong answer. In various optimization algorithms, like simplex, these errors might compound, leading to very suboptimal results.
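As a toy illustration of that compounding (a simulation I made up, not a model of any real approximate hardware), here is a dot product in which every add and multiply carries up to 5% relative error:

```python
# Toy simulation: inject up to 5% relative error into every arithmetic
# operation and watch how the errors compound over a chain of operations.
import random

def approx(x, rel_err=0.05):
    """Return x perturbed by a uniformly random relative error."""
    return x * (1 + random.uniform(-rel_err, rel_err))

def approx_dot(a, b):
    """Dot product where every add and multiply is approximate."""
    total = 0.0
    for x, y in zip(a, b):
        total = approx(total + approx(x * y))
    return total

random.seed(0)
a = [1.0] * 100
b = [1.0] * 100
exact = 100.0
got = approx_dot(a, b)
print(f"exact = {exact}, approximate = {got:.2f}, "
      f"relative error = {abs(got - exact) / exact:.1%}")
```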
But there are several scenarios where approximate computing might not hurt that much. Processing media, like pictures, sound and video, is not exact anyway, and a small error might not noticeably degrade the quality. Statistical methods that sample a large space, such as when we analyze big data, could still yield reasonable results using approximate computing.
Why do approximate computing? Because of the architecture -- approximate computing can often be done faster and with less power consumption. The tradeoffs may allow approximate computing to handle tasks beyond what we can do with traditional computing.
Approximate computing needs good theoretical models and that's where our community can come in. What's the right way to model the power-speed-accuracy tradeoffs, and how can we determine the right computational problems that can take advantage of these tradeoffs? Might have nice connections to learning theory and property testing.
Monday, December 09, 2013
Inventions are Non-Commutative: Amazon vs Amazon
(Tal Rabin, Shubhangi Saraf and Lisa Zhang asked me to remind you to publicize this: the biennial Women in Theory (WIT) workshop, NYC, May 28-30, 2014. Apps due Jan 20, 2014. Go here for all relevant information.)
If FAX machines had come out 20 years earlier they would have had far MORE impact. If FAX machines had come out 20 years later they would have had NO impact, since by then we all had email and scanners and whatnot. So when an invention comes out matters.
Ask your grandparents what white-out was for corrections on a typewriter (while you are at it, ask your grandparents what a typewriter is). If it had come out 20 years later then Bette Nesmith would not have made 50 million dollars, her son Michael Nesmith would not have had the spare time to become a musician, and there might not be a group called THE MONKEES. Gee, we would have no LAST TRAIN TO CLARKSVILLE. But I digress (from what?).
If Lasik eye surgery came first and glasses later, glasses may have been seen as a great way to avoid surgery!
Amazon is working on a technology that may become obsolete before it really takes off. I am referring to the Amazon Drone Project. The idea is that when you order an item from Amazon a drone will get it to your house (or perhaps wherever you are) VERY FAST - maybe within 30 minutes. This works well for objects under 5 pounds. Say, like what Amazon is best known for: BOOKS.
But there is a technology out there that may kill this idea before it gets off the ground. There is a competing company called Amazon which has already developed ways for people to get books on what they call a Kindle, which is like emailing them a book. So there is NO physical object to be delivered.
Hence the Amazon Drone project may, like the 8-track tape (ask your great grandparents), become obsolete due to technology being developed by Amazon.
I wonder - had Amazon Drone come out 20 years ago, would it have had more impact? Would it have made less of a demand for the Amazon Kindle?
Amazon Drone may still have an impact by delivering other things. But I wonder how long it will take before 3-D printers mean that MOST of the things Amazon delivers no longer need to be physical objects.
Thursday, December 05, 2013
Bitcoins Revisited
Two years ago I gave a lecture and posted about bitcoin. Of course what I didn't do was buy a bitcoin whose value back then was about $3 and today runs in the $1000 range.
Bitcoins have received quite a bit of press, particularly with the FBI shutting down Silk Road, the drug-trafficking site which used bitcoins for its transactions. Then people realized that bitcoins are starting to become a real currency, with a market cap of about US$11 billion, not far from the money supply of the Costa Rican colón (US$13 billion). Now governments are deciding how to deal with bitcoins as a currency, one which they really can't regulate or control.
The Economist has one of the better articles on Bitcoins, talking about some of the technical issues involved.
The Bitcoin system is designed to cope with the fact that improvements in computer hardware make it cheaper and faster to perform the mathematical operations, known as hashes, involved in mining. Every 2,016 blocks, or roughly every two weeks, the system calculates how long it would take for blocks to be created at precisely 10-minute intervals, and resets a difficulty factor in the calculation accordingly. As equipment gets faster, in short, mining gets harder. But faster equipment is constantly coming online, reducing the potential rewards for other miners unless they, too, buy more kit. Miners have formed groups that pool processing power and parcel out the ensuing rewards. Once done with ordinary computers, mining shifted to graphics-processing units, which can perform some calculations more efficiently. Miners then moved on to flexible chips that can be configured for particular tasks, called field-programmable gate arrays. In the past year, bespoke chips called ASICs (application-specific integrated circuits) have appeared on the scene.
Then there was the paper Dorit Ron and Adi Shamir wrote that explored the bitcoin transaction graph and suggested a (now supposedly debunked) connection between the mysterious creator of bitcoin, "Satoshi Nakamoto", and Ross William Ulbricht aka Dread Pirate Roberts, the founder of Silk Road.
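The retargeting rule in the quoted passage is simple enough to sketch. This is a rough paraphrase, not the reference client's actual code (the real implementation works on compact target encodings), and the numbers in the example are made up.

```python
# Rough sketch of the difficulty-retargeting rule described above.
TARGET_SPACING = 10 * 60      # ten minutes per block, in seconds
RETARGET_INTERVAL = 2016      # blocks between difficulty adjustments

def retarget(old_difficulty, actual_timespan):
    """Scale difficulty so that 2016 blocks take roughly two weeks."""
    expected = TARGET_SPACING * RETARGET_INTERVAL          # about two weeks
    # Clamp the measured timespan to limit wild swings in either direction.
    actual = min(max(actual_timespan, expected / 4), expected * 4)
    return old_difficulty * expected / actual

# Hypothetical example: the last 2016 blocks arrived 20% too fast,
# so difficulty rises by 25% to push block times back toward 10 minutes.
print(retarget(1000.0, 0.8 * 2016 * 10 * 60))   # -> 1250.0
```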
Bitcoins even made it to my Thanksgiving table. My brother thought they were a scam, even though I pointed out the system has no scammers. He remains unconvinced, though he invests heavily in gold, which has the same property: its value is mostly there because people believe it has value.
I taught a lecture on bitcoins again in my intro theory course. We all generally agreed that a few years from now we'll all remember the days when bitcoins were worth $1000. Not sure we'll remember those days because bitcoins will be worth millions or because they'll be worth pennies.
Monday, December 02, 2013
Global Warming and the Axiom of Choice
Who was the first scientist to warn of Global Warming? These questions are complicated, but I would say it was Bing Crosby in a paper called White Christmas. Here are the first two lines:
I'm dreaming of a white Christmas
Just like the ones I used to know
Why are there no more white Christmases? Because global warming made it stop snowing!
Why do otherwise intelligent people refuse to believe that Global Warming is real, is caused by humans, and that we need to do something about it? I have a conjecture and an analog. Here is what I think the reasoning is:
- Republicans have the following AXIOM (until they are in office): government IS the problem, not the solution. More than this, they think that there is NO problem that requires government action.
- Consequence: Government should do NOTHING about Global Warming.
- Since Government shouldn't do anything about Global Warming, it is not a problem.
Are there things in math where people accept an axiom despite its absurd consequences? Yes:
- Most math people believe the Axiom of Choice.
- Consequence: the Banach-Tarski Paradox
Rather than rethink their AXIOM they accept the absurd conclusion that you can break a ball into 5 pieces and reassemble them into two balls, each with the volume of the original. Fortunately, believing this does not endanger the planet.
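For readers who want the precise statements behind the analogy, here is one standard way to write them down (my phrasing of textbook statements, not Bill's):

```latex
% The Axiom of Choice: every family of nonempty sets has a choice function.
\[
\forall \mathcal{F}\;\Bigl(\,\emptyset \notin \mathcal{F}
   \;\Longrightarrow\; \exists f\colon \mathcal{F} \to \bigcup\mathcal{F}
   \ \text{ such that }\ \forall A \in \mathcal{F},\ f(A) \in A \Bigr)
\]

% Banach-Tarski (1924), assuming AC: the closed unit ball in R^3 is
% equidecomposable, using five pieces and rigid motions, with two
% disjoint copies of itself (so Lebesgue measure cannot be preserved).
\[
B^3 \;\sim\; B^3 \sqcup B^3 \qquad \text{(five pieces suffice).}
\]
```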