Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Saturday, September 27, 2014
MikeFest
I rarely highlight individual events on the blog, but one's advisor only turns sixty once. We will honor Michael Sipser at MIT on Sunday, October 26 with a one-day symposium full of great speakers in complexity. As one of Sipser's PhD students, I'm helping to organize the event and serving as emcee for the dinner speeches. Please send me any funny or embarrassing stories or pictures of Sipser through the decades.
Many of you know Sipser from his popular textbook Introduction to the Theory of Computation. Sipser was one of the driving forces in complexity in the 70's and 80's. He initiated the program of using circuit complexity to separate complexity classes and, with Merrick Furst and James Saxe, showed parity could not be computed by poly-size constant-depth circuits. He also showed how to simulate randomness by alternation, was the first to explore hardness vs randomness, made great advances in the complexity of interactive proofs, and much more. Sipser has led the MIT Mathematics department for the last ten years and was recently appointed Dean of Science.
Join us in Cambridge for this great day to celebrate an extraordinary man.
---------------
P.S. STOC call posted, Deadline November 4
Wednesday, September 24, 2014
Typecasting in Dagstuhl
After this pre-recorded typecast, we learned of the tragic death of Alexey Chervonenkis, the C of VC dimension, a huge loss to the learning community. We’ll have a proper obit soon. Now onto the typecast.
Lance: Hello and welcome to Dagstuhl for our first typecast since the 2014 Complexity Conference. Howdy Bill.
Bill: Hi Lance. Are you enjoying Dagstuhl?
Lance: I always have fun at Dagstuhl especially when you are here Bill.
Bill: I have not seen you at many talks.
Lance: So maybe you should go to more talks Bill.
Lance: Something we discussed many times.
Bill: How about a slightly different idea? At the end of this year you will have had FIVE lists of TEN best theorems. (Doing math in his head) That’s FIFTY theorems. There’s a book with a unified theme.
Lance: And I’m glad you’re going to write it.
Bill: That’s not exactly what I had in mind. But I’m happy to help you write it?
Lance: Do you think there are people who would want to buy this book?
Bill: I need your help BLOG AUDIENCE. Leave a comment to say if you would read this book. Would you read the book if you have to pay for it?
Lance: I certainly wouldn’t.
Bill: You don’t count. But they (points to the audience) do. [Bill leaves to get ice cream and comes back] I’m sure it will sell well in Silicon Valley.
Lance: Speaking of Silicon Valley, that was one tough post to write on MSR-SVC, basically an obituary post for a research lab.
Bill: Isn’t rather grim calling it an obituary?
Lance: Exactly.
Bill: Do you always give one word answers?
Lance: No.
Bill: You are a man of few words.
Lance: You are a man of a few words too many.
Bill: Yes, I like to keep conversations flowing.
Lance: Indeed, you are one of the few extroverts in complexity. Introverts like me think deeply about what to say before we say it.
Bill: Did you just insult me? How did an introvert like you become a department chair?
Lance: I fake it well. [Quickly changing topic] I hear there’s exciting news out of Maryland. And I’m not talking about the Orioles.
Lance: Because there’s no innovation in computer science. Brendan who?
Bill: He co-founded Oculus, which was sold to Facebook for Ackermann of O(1) dollars.
Lance: Sounds exciting. It is one pretty ugly building you are in now.
Bill: Moving on, how was the Complexity Conference in Vancouver?
Bill: [Reading blog post] Wow, no best paper and only 66 participants. Seems a bit lower than last year.
Lance: We were co-located with STOC last year and will be again next year at FCRC. Though not with the IEEE anymore.
Bill: Is complexity theory dying?
Lance: The talks at this Dagstuhl alone prove otherwise.
Bill: I particularly liked David Zuckerman’s talk about using statistical sum-product theorems to create non-malleable codes. Why is it so empty in here?
Lance: It’s a rare sunny day at Dagstuhl and we’re inside doing this typecast. What other topics are exciting you at Dagstuhl?
Bill: There’s a resurgence of interest in VP and VNP, Valiant’s algebraic analogues of P and NP and genuine optimism that VP <> VNP might be provable in the near future.
Lance: There is some great work there, but let's wrap this up while we still have some daylight.
Bill: You know what to say Lance.
Lance: In a complex world, best to keep it simple.
Monday, September 22, 2014
Goodbye MSR-SVC
This week I'm back at Dagstuhl for the Workshop on Algebra in Complexity Theory. Bill is here as well and we hope to have a typecast for you later this week.
The big discussion is the closing of Microsoft Research in Silicon Valley last week. The 50 researchers at MSR-SVC included 15 in a strong theory group. Luckily I captured the page last night as Microsoft has eliminated all mention of the lab from its web site. Just like the novel 1984: Microsoft doesn't have a research lab in Silicon Valley. Microsoft never had a research lab in Silicon Valley.
I visited MSR-SVC a couple of times, once inspiring a 2005 blog post on The New Research Labs. Cynthia Dwork was just starting to think about differential privacy. Jason Hartline, then a researcher at SVC, would later help me grow theory at Northwestern. In 2008 I took a trip there with Northwestern economist Mark Satterthwaite talking on how to connect CS and economics.
Omer Reingold, a favorite theorem author, writes his farewell to MSR. Sergey Yekhanin was supposed to be at Dagstuhl this week but unfortunately cancelled after getting the news. There have been rumors of changes in Microsoft Research since Satya Nadella took over as CEO, but the suddenness of the closure of MSR-SVC took everyone by surprise. Computer scientists were sent out on the streets well off the usual hiring cycle. Many other Bay Area institutions will try to help in the short term, and I would hope these researchers will find a new permanent home by the next academic year. Luca and Michael also chime in.
Industrial labs come and go but we should remember their legacy. Even as the scientists move on, the research they produced will always remain part of our discipline.
Thursday, September 18, 2014
Gentry and Lurie and Zhang can say they are geniuses without braggin' - MacArthur Geniuses
(If I mispell anything in this post, well, that's why I'm not a MacArthur Genius, or any other kind of Genius.)
Craig Gentry, of homomorphic encryption fame, won a 2014 MacArthur Genius award. Here is an article about it and a description of his work, which is understandable to most non-geniuses. It is great work and I am glad to see him and the work honored.
Have other computer scientists won it? Yes. Have other CS theorists won it? Yes. Here is a list, though it may be incomplete: Peter Shor (1999), Erik Demaine (2003), Jon Kleinberg (2005), Daniel Spielman (2012).
Jacob Lurie, who does very abstract mathematics, also won a 2014 MacArthur genius award. He won the Breakthrough Prize earlier ($3 million) and the MacArthur ($650,000), so he owes me 3650 lunches (I mentored him in HS and hence get a free lunch for every $1000 he wins). Here is an article about it and a description of his work, which is not understandable even to most math geniuses. It's not the article's fault; it's just very hard to describe very hard and deep mathematics unless you can relate it to something very concrete like crypto (or other applications) or some things in number theory: you can at least tell someone the statement of Fermat's Last Theorem. I am sure it's great work; he seems to be generalizing math to an unprecedented degree. Note that the generality does pay off in solving real problems, for example this paper.
Yitang Zhang, who proved that there is a constant c such that infinitely often there are primes that are c apart (he proved c ≤ 70,000,000, but it has since been brought down to 246 - see here), also won a 2014 MacArthur genius award. While the proof is hard, the result can be explained to anyone. See here for an article about his prize and his result.
Have other mathematicians won it? Yes, around 31 total including the two this year. Two that I will note: Terry Tao and Andrew Wiles.
Tuesday, September 16, 2014
Maryland Theory Day October 10!
Univ of Maryland at College Park is having a Theory Day on Friday, October 10.
Free Registration and Free Lunch! (there are no economists coming to tell us there is no such thing).
For information and registration go here.
A good way to learn lots of current theory in a short time.
Schedule:
8:30-9:00 Light Breakfast and Intro Remarks
9:00-9:20 Gasarch, UMCP
NIM with Cash
9:25-9:45 Mount, UMCP
A New Algorithm for Approximating the Euclidean Minimum Spanning Tree
9:50-10:10 Samir, UMCP
To do or not to do: scheduling to minimize energy
10:20-11:00 Coffee Break
11:00-12:00 Distinguished Invited Speaker Avrim Blum, CMU
Reconstructing preferences and priorities from opaque transactions
12:00-1:00 Catered Lunch
1:00-2:00 Distinguished Invited Speaker Sanjeev Arora, Princeton
Overcoming the intractability bottleneck in unsupervised learning.
2:00-2:30 Coffee Break
2:30-2:50 Elaine Shi, UMCP
Circuit ORAM and Tightness of the Goldreich-Ostrovsky bound
2:55-3:15 David Harris, UMCP
The Moser-Tardos Framework with Partial Resampling
3:20-3:40 Mohammad Hajiaghayi, UMCP
Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond
3:45-4:05 Michael Dinitz, JHU
Explicit Expanding Expanders
4:10-5:00 Poster Session in 2120 (Grad Students)
Monday, September 15, 2014
Richard Lipton Wins Knuth Prize
Georgia Tech professor and fellow blogger Richard Lipton will receive the 2014 Knuth Prize at the upcoming FOCS conference in Philadelphia. The Knuth Prize is given jointly by the ACM SIGACT and the IEEE TC-MFCS for major research accomplishments and contributions to the foundations of computer science over an extended period of time.
Lipton's research includes major results across a large spectrum of theoretical computer science, from probabilistic algorithms to DNA computing to communication complexity. I'd like to highlight a couple of his papers in computational complexity that were hugely influential, including on much of my own research.
Richard Karp and Lipton showed that if NP has non-uniform polynomial-size circuits then the polynomial-time hierarchy collapses. The result, and its successors, are a powerful tool used to show that a number of interesting hypotheses are unlikely to hold, and it plays an important role in circuit lower bounds and pseudorandomness. Most importantly, Karp and Lipton gave the strongest evidence that NP does not have small circuits, justifying the circuit lower bound approach to separating complexity classes.
In lesser known but perhaps even more influential work, Lipton developed a notion of program testing and showed how to test the permanent function, a result that directly led to the surprising power of interactive proofs. This algebraic characterization of hard problems led us to IP = PSPACE, MIP = NEXP and the PCP theorem.
Again this just covers a sliver of his impressive canon of research. Congrats Dick!
Thursday, September 11, 2014
Beyond the Commodity
Back in 2005 I lamented the fact that students viewed the computer as a commodity: a tool they use, like an automobile, with no reason to understand how or why it works. In 2011 I noticed a change: computers like IBM's Watson were starting to make computer science cool again.
Now we are in the midst of yet another major change, reflected in renewed interest in high school computer science and huge enrollment growth in universities, particularly among non-majors taking upper-level CS courses. Jobs certainly drive much of this enrollment, but for an important reason. Basic computer science principles and reasoning have become a critical tool in almost any business. Every large company tries to glean knowledge from data, deal with security and privacy challenges, and solve big optimization questions in an ever more complex environment. I've been told that car companies will take as many Mechanical Engineering majors with CS minors as Georgia Tech can produce. For what are cars today but computers on wheels?
We've been down this path before, and trends that seem to be with us forever die out, leading to computer science disillusionment. Somehow this seems different, but we'll just have to wait and see.
Monday, September 08, 2014
A Statistical oddity ?
I keep a list of people that are famous-to-me that are old so that if someone dies I won't be surprised. When Lauren Bacall died recently I (1) knew who she was, AND (2) knew she wasn't already dead. I DO NOT look at lists of celebs. My list is organic: if I think of someone who seems old ("Gee, I wonder if that famous probabilist Monty Hall is still alive? He is! He's 92.") I look it up, and if they are over 80, they go on the list. Most people are surprised to know that Doris Day is still alive.
Okay, so what of it? Bill has another weird hobby. (Add this to collecting satires, collecting papers that apply Ramsey Theory, and writing a satire of papers that apply Ramsey theory).
I decided to see how many people on my list had the same birthday and see if it was reasonable with regard to probability (the birthday paradox and all that). The list currently has 70 people.
What I found was probably reasonable in one respect and odd in another.
REASONABLE: Nine pairs had the same birthday. One triple had the same birthday.
ODD: There were NO pairs or triples of same birthdays in July, September, October, November, or December.
I leave as an exercise: How reasonable is what I called reasonable and how odd is what I called odd?
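I couldn't resist a sanity check. Here is a small Python sketch, under two simplifying assumptions: birthdays are uniform over a 365-day year, and the five listed months are treated as one contiguous 153-day window (31+30+31+30+31 days).

```python
import random
from math import comb

# Expected number of birthday-sharing pairs among n people with
# uniform birthdays: C(n,2)/365.
n = 70
expected_pairs = comb(n, 2) / 365
print(expected_pairs)  # about 6.6, so nine pairs is quite reasonable

# Monte Carlo estimate of the "odd" event: no shared birthday lands in
# the five-month window (modeled as the first 153 days of the year).
def prob_no_collision_in_window(n, window=153, days=365, trials=20000):
    hits = 0
    for _ in range(trials):
        seen, clash = set(), False
        for _ in range(n):
            d = random.randrange(days)
            if d < window:
                if d in seen:
                    clash = True
                    break
                seen.add(d)
        hits += not clash
    return hits / trials

print(prob_no_collision_in_window(70))
```

The simulation suggests the "odd" event is unusual but far from impossible, which is the flavor of answer the exercise is after.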
Thursday, September 04, 2014
Favorite Theorems: Quantum Interactive Proofs
Practical quantum computing still has many hurdles to jump through, but the quantum computing model does generate great complexity questions and often surprising answers.
QIP = PSPACE by Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay and John Watrous

QIP is the quantum analogue of interactive proof systems. Since IP = PSPACE we get the consequence QIP = IP: quantum doesn't give an advantage over classical randomness in the interactive proof model. I wouldn't read too much into that interpretation; rather, we have a strange situation where IP is far more powerful than we initially suspected and QIP is weaker than expected, and so we get the collision at PSPACE.

QIP ⊇ PSPACE follows from IP = PSPACE. In an earlier paper, Kitaev and Watrous showed QIP ⊆ EXP by reducing QIP to an exponential-sized semi-definite program. This paper applies a clever matrix multiplicative weights algorithm to approximate a subclass of SDPs and thereby achieves QIP ⊆ PSPACE.

We've also seen progress on QMIP, quantum interactive proofs with multiple entangled provers who cannot otherwise communicate. Whether QMIP contains MIP = NEXP remained open for a long time because the provers might use entanglement to cheat. Ito and Vidick showed that entangled provers can't gain an advantage on the multilinear test used in the original MIP = NEXP paper, and thus QMIP does contain NEXP. Whether QMIP is contained in NEXP remains open.
Tuesday, September 02, 2014
How hard is changing fields? Ask Sheldon!
In the last season of The Big Bang Theory, Sheldon wants to change fields from string theory to something else (I don't recall if he settled on what it would be, though standard model physics, loop quantum gravity, and the calculation of nuclear matrix elements were mentioned negatively, and geology is, according to Sheldon, not a real science).
Sheldon faced opposition from his department. And since Physics is... hard... changing fields seems hard.
How hard is it to change fields, both intellectually and in terms of your dept?
- If you are hired as a string theorist and you are one of the only ones in your dept, your dept may quite reasonably ask you to still teach string theory. I think this is fine.
- Math and Physics are far older than CS so to change fields requires learning more background knowledge. In CS it was easier about 20 years ago, but CS has grown so much that I suspect it would be harder now.
- There may be a time when you have fewer papers and grants as you are making the transition. Hence it may be unwise to do this before you get tenure.
- If your dept hired you to do string theory and you change to the calculation of nuclear matrix elements, should they mind? I would think that as long as it's still good work they wouldn't, and they should give you enough time to get grants and papers in the new area. If you change to a field they don't care about, or to a field outside the department's area, they might not like that. If Sheldon went into computational complexity, would his dept (physics) be justified in not liking that? If he solved P vs NP then all would be forgiven.
- Perhaps the further you move from your original field, the better your work has to be before your dept doesn't mind. This could be modelled by a formula. Maybe Sheldon will change to computational dept dynamics and derive it for us.
- The obvious thing to say is Depts should allow their professors to wander free as a bird and not erect arbitrary walls since the best work comes from people not being constrained. I would like to think this is true but I wonder--- how many people have changed fields and ended up doing great work? good work? totally failed?
Thursday, August 28, 2014
Sixteen Years in the Making
Every paper has a story, but Sunny Daniels's arXiv paper from yesterday deserves a blog post.
We begin in 1982 when Ravi Kannan proved that Σ2 (the problems computable in NP with an NP oracle) cannot have n^2-size circuits. Kannan's result holds for n^k-size circuits for any fixed k, but for this story we'll keep it simple.
Kannan had an ingeniously simple proof. By diagonalization you can create a language L in Σ4 that does not have n^2-size circuits. Now there are two cases:
- SAT doesn't have n^2-size circuits. Since SAT is in Σ2 we are done.
- SAT has n^2-size circuits. Then by Karp-Lipton Σ4 = Σ2, so L is in Σ2 and we are done.
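In symbols, the two cases read as follows (a sketch, writing SIZE(n^2) for the languages with n^2-size circuits and L for the diagonal language in Σ4):

```latex
% Kannan's dichotomy; L \in \Sigma_4 \setminus \mathrm{SIZE}(n^2) by diagonalization.
\begin{align*}
\text{Case 1:}\quad & \mathrm{SAT} \notin \mathrm{SIZE}(n^2)
  \;\Rightarrow\; \Sigma_2 \not\subseteq \mathrm{SIZE}(n^2)
  && \text{since } \mathrm{SAT} \in \mathrm{NP} \subseteq \Sigma_2. \\
\text{Case 2:}\quad & \mathrm{SAT} \in \mathrm{SIZE}(n^2)
  \;\Rightarrow\; \mathrm{NP} \subseteq \mathrm{P/poly}
  \;\Rightarrow\; \Sigma_4 = \Sigma_2
  && \text{by Karp-Lipton, so } L \in \Sigma_2 \setminus \mathrm{SIZE}(n^2).
\end{align*}
```

Either way some Σ2 language lacks n^2-size circuits, which is exactly the claimed lower bound.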
Kannan's proof is non-constructive and doesn't give an explicit Σ2 language that we can show doesn't have n^2-size circuits. It's either SAT or L, but one can't be sure which.
In 1998, Sunny Daniels, a PhD student at Rutgers, took Eric Allender's complexity course. Eric offered up a constructive example of Kannan as an open problem. Sunny came up with a solution. He wrote up a draft in LaTeX but for personal reasons dropped out of academics and never published the paper.
In 2003, Jin-Yi Cai and Osamu Watanabe, not aware of Daniels' paper, came up with their own independent construction and presented their paper at the COCOON conference in Montana. Word got back to Sunny but he thought he had lost the LaTeX file and didn't want to retypeset the whole proof.
Sunny had Iomega Zip Drive cartridges from his Rutgers days. Recently he found someone who had a Zip Drive reader and managed to recover the files. In there he discovered the original LaTeX, cleaned the paper up, and, sixteen years after his proof, put the paper on arXiv. Even if you don't care about the math, read the introduction for the complete version of this story.
Kannan's proof actually shows Σ2 ∩ Π2 does not have n^2-size circuits, and this was later improved to S2P. Whether we have an explicit language in Σ2 ∩ Π2, or in S2P, without n^2-size circuits still remains open.
Monday, August 25, 2014
A Deadly Side of Complexity
Better algorithms can lead to better medicine and save lives. Just today Tim Gowers discusses Emmanuel Candès' ICM Plenary Lecture, which among other things describes how Candès' work on compressed sensing leads to shorter MRI scans for children, greatly reducing the risk of oxygen deprivation. Prove P = NP with a practical algorithm, and you'll conquer that worst of our diseases. Sounds great until you realize what we can't do.
I was talking to a cancer researcher recently and he pointed out that many of their challenges are indeed algorithmic. But he also brought up the contrapositive. Since we don't have great algorithms now, we don't know how to make sense of DNA sequences and in particular don't know how to map genetic markers to an appropriate cancer treatment. He works with cancer patients knowing he can't give them the best possible treatment, not because of a lack of data, but due to a lack of ways to analyze that data. People die because we don't have the ability to break through the complexity of these algorithmic challenges.
Thursday, August 21, 2014
Turing's Oracle
He called it the "oracle". But in his PhD thesis of 1938, Alan Turing specified no further what shape it might take... Turing has shown with his universal machine that any regular computer would have inescapable limitations. With the oracle, he showed how you might smash through them.

This is a fundamental misinterpretation of Turing's oracle model. Here is what Turing said in his paper Systems of Logic Based on Ordinals, Section 4.
Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of the oracle apart from saying it cannot be a machine. (emphasis mine)

The rest of the section defines the oracle model and basically argues that for any oracle O, the halting problem relative to O is not computable relative to O. Turing is arguing here that there is no single hardest problem; there is always something harder.
If you take O to be the usual halting problem then a Turing machine equipped with O can solve the halting problem, just by querying the oracle. But that doesn't mean that you have some machine that solves the halting problem for, as Turing has so eloquently argued in Section 9 of his On Computable Numbers, no machine can compute such an O. Turing created the oracle model, not because he thought it would lead to a process that would solve the halting problem, but because it allowed him to show there are problems even more difficult.
Turing's oracle model, like so much of his work, has played a major role in both computability and computational complexity theory. But one shouldn't twist this model to think the oracle could lead to machines that solve non-computable problems and it is sacrilege to suggest that Turing himself would think that.
Monday, August 18, 2014
Complexity versus Algorithms: The FOCS Challenge
In recent years, I've heard complaints from my complexity colleagues that FOCS and STOC are mostly algorithms and from the algorithm buddies that STOC and FOCS are mostly complexity. What exactly counts as a complexity or algorithms paper has become quite blurred in recent years. So let's try an experiment. Below is a poll I've created using titles from the upcoming FOCS conference. Which of these papers do you consider complexity? Does complexity in the title make them a complexity paper?
If you are interested, you can find the manuscripts for most of these papers on the FOCS accepted papers list.
Disclaimer: This is a completely non-scientific poll solely for the interest of the readers of this blog. The results will have no effect on future conference papers.
Thursday, August 14, 2014
Favorite Theorems: Limited Independence
When can limited randomness act as well as true random bits?
Polylogarithmic independence fools AC0 circuits by Mark Braverman (JACM 2010)

To explain this result, consider choosing uniformly from among the following four strings:
000 110 101 011
If we look at any two of the bits, say the first and third, all four possibilities 00 10 11 01 occur. The sequence is thus 2-wise independent. We can get 2-wise independence using only two random bits to choose one of the four strings. In general one can get k-wise independence on n-bit strings using O(k^2 log n) random bits.

Braverman shows that any polylogarithmically independent distribution fools polynomial-size constant-depth circuits (AC0). More precisely, for a size-s depth-d circuit C and k = (log(s/ε))^{O(d^2)}, the probability that C outputs 1 on uniformly-random inputs differs by at most ε from the probability C outputs 1 on inputs chosen from a k-wise independent distribution. Braverman's result followed after Bazzi and Razborov proved a similar result for depth-2 circuits (CNF formulas).
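A quick Python check (my own sketch, not from the paper) confirms the 2-wise independence of those four strings:

```python
from itertools import product

# The four strings from above, chosen uniformly at random.
strings = ["000", "110", "101", "011"]

# For every pair of positions, the four strings hit each of
# 00, 01, 10, 11 exactly once -- so any two bits are jointly uniform.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pairs = [(s[i], s[j]) for s in strings]
    assert sorted(pairs) == sorted(product("01", repeat=2))
```

Note that only two truly random bits are needed to pick one of the four strings, yet any two of the three output bits look fully random.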
Another nice result along these lines: Ilias Diakonikolas, Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio and Emanuele Viola show that Bounded Independence Fools Halfspaces. A half-space is just a weighted threshold function (is the sum of w_i x_i at most some given θ?). Diakonikolas et al. show that one can fool halfspaces with k = O(ε^{-2} log^2(1/ε)); in particular for constant ε, k is a constant independent of the number of variables.
Tuesday, August 12, 2014
Subhash Khot wins Nevanlinna
At the opening ceremonies of the International Congress of Mathematicians in 2014, Subhash Khot was awarded the Rolf Nevanlinna Prize, given every four years to an under-40 researcher for outstanding contributions in Mathematical Aspects of Information Sciences. Subhash's citation reads
Subhash Khot is awarded the Nevanlinna Prize for his prescient definition of the “Unique Games” problem, and leading the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems; his work has led to breakthroughs in algorithmic design and approximation hardness, and to new exciting interactions between computational complexity, analysis and geometry.

Khot's work has indeed generated a large research agenda over the last decade. I highlighted his work in March's favorite theorems post.
In other big news, we have our first female Fields Medalist Maryam Mirzakhani for contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces. Still no female winners among the nine Nevanlinna winners. Artur Avila, Manjul Bhargava and Martin Hairer also received Fields medalists. Stanley Osher won the Gauss Prize, Phillip Griffiths the Chern Medal and Adrián Paenza the Leelavati Prize.
Pictures and press releases and citations of all the prize winners.
Monday, August 11, 2014
Questions that arose teaching High School students crypto
I taught a 3-week, 3-hours-a-day course to high school students titled
Computer Science: A Hands Off Approach.
Given the time constraints and the fact that some students already know (say) Java and some don't know any language, this seemed like a good choice.
I decided to teach mostly pre-RSA crypto with the following theme: Alice and Bob want to pass secret messages. How do they do it? I covered shift, affine, general substitution, Vigenere, matrix, 1-time pad, and Diffie-Hellman (a high point of the course, since Alice and Bob don't have to meet in a dark alley). I also did secret sharing with polynomials, error correcting codes (elementary), Huffman codes, and some applications of mod arithmetic.
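Since Diffie-Hellman was the high point, here is a toy Python sketch of the exchange. The parameters are hypothetical and far too small for real use; they only illustrate the mechanics:

```python
import random

# Public parameters: a small prime p and a generator g of Z_p^*.
# (Toy values -- real Diffie-Hellman needs a prime of thousands of bits.)
p, g = 101, 2

a = random.randrange(2, p - 1)   # Alice's secret exponent
b = random.randrange(2, p - 1)   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A = g^a mod p in the clear
B = pow(g, b, p)   # Bob sends B = g^b mod p in the clear

# Both arrive at g^(ab) mod p without ever transmitting it.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob
```

An eavesdropper sees p, g, A and B; recovering the key from these is the discrete log problem, which is why the parameter sizes matter.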
While teaching this course some points of interest came up. I suspect most are known, and I'd appreciate polite comments telling me so.
- A student suggested this cipher: code a,b,c,...,z into a 100-symbol alphabet and map each letter to a set of symbols whose size matches the letter's frequency. For example, if e occurs 9% of the time then map e to 9 symbols. Then use those symbols at random. This would seem to foil frequency analysis. Does it? Has it been used? What are the downsides?
- Many students suggested using Vigenere but instead of having every xth letter be done by a different shift, have it be affine or general substitution. Of course this can be cracked the same way Vigenere is cracked. But it does raise an interesting point: which ciphers are used and not used can be based on when things were discovered. Martians may very well have used some kind of Vigenere where every xth letter is a different general substitution cipher.
- Wikipedia and other sources say that the Vig cipher was unbroken for 300 years. A student pointed out that it might have been broken but the one who broke it didn't say. Jon Katz (crypto prof at UMCP) can't believe it wasn't broken immediately, but of course hindsight is 20-20.
- (I have commented on this before) A matrix cipher with a 10x10 matrix seems uncrackable using plaintext only. I have made this question rigorous here.
- I made the comment that 1-time pads are not used much (is this even true? Might depend on the definition of ``much'') because getting a perfect source of randomness is hard. During WW II they also would be hard to use because it's hard to carry around a billion bits. But now that would not be a problem. Again--if history had been different we might use 1-time pads, or quasi-random ones, today!
- I told the students about arithmetic mod n. One of the students really really didn't like that (say) in mod 7 we have 1/3 = 5. He REALLY wants 1/3 to be between 0 and 1. I suspect he didn't care much for discrete logs either. This was a GOOD student- so his objection was not that it was hard.
- For some of the students their first exposure to matrices was matrix ciphers mod 26. I hope they can recover from that.
- Most of the students know what logs were, but weren't that comfortable with them. And here I go and teach them discrete logs! I hope they can recover from that.
- I showed the students that there were quadratic equations mod 12 with more than 2 roots and challenged them to see how many roots they could find for other mods. One of my students ran massive computer tests, found stuff, and in the end had a result that didn't need all of his computations: x^2 \equiv 0 mod n^2 has n roots. And I later had on a HW x^a \equiv 0 mod n^a. I am sure none of this is new, but it was new to him when he discovered it and of course new to the class when I taught it.
- I taught the class the Baby-Step/Giant-Step discrete log algorithm, which has sqrt(p) preprocessing and sqrt(p) running time. It's not used because it also takes sqrt(p) space; however, it was good to show them that discrete log can be done in sqrt(p) time, much better than p time--- hence Alice and Bob need to pick their parameters larger than they may have thought when doing Diffie-Hellman. That night I easily worked out that it can be modified to do p^{2/3} preprocessing (and space) but just p^{1/3} time. HW was p^a preprocessing, p^{1-a} time. One of the students inquired if this algorithm has a name. I looked over the web but couldn't find it anywhere, so I told them to call it The Gasarch Variant of Baby-Step, Giant-Step. I also quickly told them NOT to be impressed--- and this helped me make a point I had made often, that CS is a NEW field, so NEW that one can present new or newish results to HS students. I also made the point that I am sure this variant is KNOWN to anyone who would care, but (1) they may not care since it takes even more space if a is larger than 1/2 and more time if a is less than 1/2, and (2) not everything is on the web. That last point freaked them out more than the quadratic equation mod 12 that had more than two roots.
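Regarding the first student's cipher above: it is essentially the classical homophonic substitution cipher. A minimal Python sketch of the idea, with made-up frequency pools and a toy message of my own:

```python
import random

# Map each letter to a pool of cipher symbols sized roughly by its
# frequency, so every cipher symbol appears about equally often.
# (Made-up pools for three letters only, to keep the sketch short.)
pools = {'e': list(range(0, 9)),    # ~9% -> 9 symbols
         'a': list(range(9, 17)),   # ~8% -> 8 symbols
         't': list(range(17, 24))}  # ~7% -> 7 symbols

def encrypt(msg):
    # Each plaintext letter becomes a random symbol from its pool.
    return [random.choice(pools[c]) for c in msg]

def decrypt(cipher):
    # Invert: every symbol belongs to exactly one letter's pool.
    lookup = {sym: c for c, pool in pools.items() for sym in pool}
    return ''.join(lookup[sym] for sym in cipher)

assert decrypt(encrypt('teatate')) == 'teatate'
```

This does flatten single-letter frequencies; a known downside is that digram and repeated-word statistics still leak, which is how homophonic ciphers were historically attacked.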
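The student's x^2 \equiv 0 mod n^2 observation is easy to confirm by brute force (a few lines of Python of my own, no massive computer tests needed):

```python
# x^2 = 0 (mod n^2) has exactly the n roots 0, n, 2n, ..., (n-1)n:
# n^2 divides x^2 exactly when n divides x.
def roots(a, m):
    # all x in {0, ..., m-1} with x^a = 0 (mod m)
    return [x for x in range(m) if pow(x, a, m) == 0]

for n in range(2, 10):
    assert roots(2, n * n) == [k * n for k in range(n)]
```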
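And here is a sketch of baby-step/giant-step with the tradeoff made explicit. With the default t near sqrt(p) it is the classical algorithm; other values of t give the p^a preprocessing / p^{1-a} time variants discussed above. (My own code with toy parameters; pow(g, -m, p) needs Python 3.8+.)

```python
from math import isqrt

def discrete_log(g, h, p, t=None):
    """Find e with g^e = h (mod p), using up to about t giant steps."""
    if t is None:
        t = isqrt(p) + 1          # classical sqrt(p)/sqrt(p) balance
    m = p // t + 1                # baby-step table of size about p/t
    table = {pow(g, j, p): j for j in range(m)}   # g^0, ..., g^{m-1}
    g_inv_m = pow(g, -m, p)       # g^{-m} mod p
    y = h % p
    for i in range(t + 1):        # giant steps: y = h * g^{-im}
        if y in table:
            return i * m + table[y]
        y = (y * g_inv_m) % p
    return None                   # no log exists

p, g = 101, 2                     # toy parameters; 2 generates Z_101^*
e = 77
assert discrete_log(g, pow(g, e, p), p) == e        # balanced version
assert discrete_log(g, pow(g, e, p), p, t=4) == e   # bigger table, fewer steps
```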
Thursday, August 07, 2014
The n^{1/3} barrier for 2-server PIRs broken! A lesson for us all!
Recall: PIR stands for Private Information Retrieval. Here is the model: A database is an n-bit string (my wife tells me this is not true). The user wants to find the ith bit without the database knowing what bit the user wants. The user COULD just request ALL n bits. Can the user use less communication? See here for a website of many papers on PIR.
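To make the model concrete, here is a sketch (my own) of the simplest 2-server scheme. It sends about 2n bits, so it doesn't beat the trivial protocol, but it shows how two non-colluding servers give information-theoretic privacy: each server sees only a uniformly random subset of indices.

```python
import random

def server_answer(x, S):
    # Each server XORs together the database bits indexed by the set it gets.
    b = 0
    for j in S:
        b ^= x[j]
    return b

def retrieve(x, i, n):
    # User picks a uniformly random subset S, sends S to server A
    # and S with i's membership flipped to server B.
    S = {j for j in range(n) if random.random() < 0.5}
    T = S ^ {i}                      # symmetric difference
    return server_answer(x, S) ^ server_answer(x, T)   # = x[i]

n = 32
x = [random.randint(0, 1) for _ in range(n)]
assert all(retrieve(x, i, n) == x[i] for i in range(n))
```

Both S and T are individually uniform, so neither server alone learns anything about i (as long as the servers don't compare notes).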
- If there is just one copy of the DB and there are no constraints on the computational power of the DB, then ANY protocol requires n bits of communication.
- If there is one copy of the DB then, with some computational constraints, you can do better. I state one result: if quadratic residuosity is hard then there is a protocol using n^epsilon bits of communication (Kushilevitz and Ostrovsky). The rest of this post is about the info-theoretic case, so the DB has no computational constraints.
- If there are two copies of the DB then there is a protocol that uses n^{1/3} bits. (Chor, Kushilevitz, Goldreich, Sudan)
- If there are three copies of the DB then there is a protocol that uses n^{1/32582658} bits. Really! (Yekhanin).
- If a 2-server protocol is a bilinear-group protocol (which all prior constructions were) then it must use n^{1/3} bits (Razborov and Yekhanin).
- Most known constructions were put into a geometric framework (Woodruff and Yekhanin).
The barrier result was only for a certain type of PIR. It DID cover all the known PIR schemes at the time. But it does not cover this one.
This is how Barrier SHOULD work--- they should not discourage, but they should point to difficulties so that someone can overcome them.
Also note, the new result used some of the framework of Woodruff and Yekhanin. So it's good to unify and obtain new proofs of old results.
Tuesday, August 05, 2014
How I know Moneyball is a true story. Do we clean up theorems and life too much?
A few years ago I saw the movie Moneyball about how the Oakland A's used intelligent statistics to... win?
No, but to do better-than-expected. Even if I didn't know it was a true story I would have assumed it was because the following are very rare or odd in a fictional story:
- At the end the team doesn't win- it just does better than expected. In the typical sports movie the underdog pulls it all together and wins. In some the underdog loses but they are now better people or something. In an episode of Cheers where they were the underdog to Gary's Tavern in a bloody mary making contest, the Cheers gang cheats and wins. But in Moneyball, and in NO other sports (or contest) movie that I know of, does the underdog do better-than-expected in an undramatic manner. This is NOT a complaint- just a note that it's real life.
- In Moneyball the General Manager wants to try out mathematical methods and the Manager resists. In most movies it's the suits that are wrong and the people on the ground that are right. This is even a theme of many articles about business that I read in magazines on airplanes. So this inversion is odd- but again, you can't argue that a true story is unrealistic or undramatic.
- Billy Beane, the one who wants to use math techniques, thinks that what baseball scouts look for is the wrong thing. In fact, they misjudged him when he was a young player. But in which direction?--- they thought he was BETTER than he was. If this were a fictional story surely the scouts would think he was WORSE than he was.
In academia we do clean things up for a better story line. If the true motivation for working on a problem doesn't make sense given the final paper, we present a different motivation. Our original proof is intuitive but ugly, so we change it to be polished but less clear where it came from. Historians often simplify to make sense of things. I am NOT complaining- merely asking: do we do it too much?
When I was in ninth grade and was told that you could solve a quadratic equation (I rederived the quadratic formula once a month to make sure I could), a cubic, and a quartic, but not a quintic, I immediately said "I want to go to college to learn why you can't solve a quintic." That sparked my interest in math.
Is the above story true? I am sure that in ninth grade I did learn that the quintic was unsolvable, that this was of great interest to me, and that I really did rederive the quadratic formula once a month. But I really doubt the story is as clean as presented above. Even so, the story is true in spirit. However, I would not want to push the point.
How about you? Do you tell stories about yourself or about others that are just a little too polished? Not so much false, and not even to put yourself in a better light, but just a little too clean to have really happened.
Thursday, July 31, 2014
How NOT to bet
(True Story, but names are changed for no good reason.)
Alice, Bob, and Carol are betting on who will be the Republican Nominee in 2016.
- Alice bets on Jeb Bush. Alice's reasoning is that in the past the Republicans always end up with a moderate who they think can win. NOTE: From Alice's prediction you can tell NOTHING about her politics.
- Bob bets on Paul Ryan. Bob reasons that in the past the Republicans always end up with someone they are familiar with, quite often someone who has run before. Normally this might be someone who ran in the primaries four years earlier; however, none of Herman Cain, Michele Bachmann, Rick Santorum, or Newt Gingrich seem likely. Rick Perry is possible but seems unlikely. That leaves Paul Ryan. (Curiosity- people who lose, even if it's close, do not run again. Not sure why. So it won't be Romney.) NOTE: From Bob's prediction you can tell NOTHING about his politics.
- Carol bets on Rand Paul. Carol's reasoning is that the American Public is tired of Big Government and is ready for the Rand Paul Revolution! NOTE: From Carol's prediction you can tell EVERYTHING about her politics.
One full dollar!
I think I read that you are better off betting AGAINST a triple crown if a horse won the first two legs for the same reason- there will be emotion driving the bets FOR, and that increases the payoff for those betting AGAINST. But of course nothing is guaranteed. And there are some cases where it's just ridiculous to bet against (Secretariat in 1973); however, to use my system you can't just skip one because you ``just know'' there will be a triple crown. It's a long-term strategy. And one could be wrong. That's why it's called gambling!
Are there other cases where long term betting with facts and against sentiment can give you an edge?
Sunday, July 27, 2014
The Change Problem and the Gap between Recreational and Serious Mathematics
In a prior post (here) I discussed the change problem: given a dollar, how many ways can you make change using pennies, nickels, dimes, and quarters (and generalize to n cents)? Scouring the web I found either programs for it, the answer (242), or answers like ``it is the coefficient of x^100 in ...''. What I didn't find is a formula for n cents. So I derived one using recurrences, wrote this up with some other things about the change problem, and posted it on arXiv. (I have updated the paper many times after comments I got. It is still where it was, but updated, here.)
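For readers who just want the count rather than the closed form, the standard dynamic-programming recurrence computes it directly (a quick sketch of my own, not the derivation from the paper):

```python
# ways(n): number of ways to make n cents from pennies, nickels,
# dimes and quarters, by the usual coin-change recurrence.
def ways(n, coins=(1, 5, 10, 25)):
    count = [1] + [0] * n           # one way to make 0 cents
    for c in coins:                 # add one coin type at a time
        for amount in range(c, n + 1):
            count[amount] += count[amount - c]
    return count[n]

assert ways(100) == 242             # the answer for a dollar, as above
```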
I then got some very helpful emails pointing me to the vast math literature on this problem. I was not surprised there was such a literature, though I was surprised I hadn't found any of it searching the web.
The literature didn't have my formula, though it had several ways to derive it (some faster than mine).
Why isn't the formula out there? Why couldn't I find the literature?
- The formula falls into a space right between recreational and serious math. I use a recurrence but not a generating function. Recreational math just wants to know the answer for 100, so a program suffices (or some clever by-hand recurrences, which are also in my paper). Serious math people are happy to show that a formula CAN be derived and to refine that in terms of how quickly the coefficients can be found.
- There is a gap between recreational and serious math. In particular- the serious people aren't talking to (or posting on websites to) the rec people.
- Different terminologies. I was looking for ``change problems'' and ``coin problems'' and things like that when I should have been looking for ``Sylvester denumerant'' or ``Frobenius problem.''
Thursday, July 24, 2014
Need some book reviewed- faster than usual
I have been the SIGACT NEWS book review editor since 1997. I have tried to get about 10 books reviewed per column. I have succeeded- perhaps too well! I have gotten so many reviews out that I only have six reviews left.
I DO have many books that could be reviewed, and that is where YOU come in!
List of books available for review: Here
Advice for reviewers: Here
LaTeX Template for reviews: Here
ALL of my prior columns: Here
IF you want to review a book DO NOT leave a comment- just email me (at gasarch@cs.umd.edu; someone emailed Lance instead, which you shouldn't do) a set of books you want to review. I will pick out one for you--- it may be based on what I've already got requests for.
Since it is summer you are not as busy as during the regular year (hmmm- I am running an REU program AND teaching an intense high school class AND going to a security conference AND mentoring three high school students hoping for some more free lunches, so I am actually MORE busy), so summer is a GOOD time to review a book.
Deadline to ASK for a book: Please make your request BEFORE July 28 so that when I email in my column with the list-of-books-I-need-reviewed, it is accurate.
Deadline for Review: About two months after you get it, though this can be flexible.
Tuesday, July 22, 2014
The Burden of Large Enrollments
This week I'm at the CRA Snowbird conference, the biennial meeting of CS chairs and other leaders in the field. In 2012 much of the discussion focused on MOOCs. This year the challenge facing most CS chairs is booming enrollments in CS courses. A nice problem to have, but a problem nevertheless.
Last night we had a broad discussion about the burgeoning number of students. Ed Lazowska showed his NCWIT slides giving anecdotal evidence. It's too early to get a complete survey of CS departments, but hardly anyone in the audience felt that enrollments were not going up by double (or triple) digit percentages. Not only an increase in majors, but also a number of non-majors, minors and others who take upper level courses.
At Georgia Tech it seems every engineering student wants to minor in CS.
We all have to deal with the increase in the short term. Depending on the institution, we have more classes, larger classes, enrollment caps, increases in instructors, PhD lecturers, teaching postdocs, automated grading, undergrad and MS TAs and having other departments take up some of the load. All of this could hurt the undergrad experience but with careful planning we can mitigate those effects.
What's driving the increase? Some suggested a change in the ubiquitous view of computing and the more positive opinion of geeks in society. More likely though, the driver is jobs, the great need for computer scientists and not just from computing companies, and the limited jobs for those graduating with non-technical majors.
Is the big shift in enrollment a permanent change or just another part of the boom and bust cycle in CS? More than a theoretical question: a permanent change makes for a better argument with deans and provosts to increase faculty sizes in computer science. There are some signs of "this time is different," such as the increase in non-majors in upper level courses, but it will be a hard sell unless the increase is sustained for several years. One good argument: if you draw a line through enrollments since the 70's you get a consistent 10% yearly increase averaged over booms and busts.
Thursday, July 17, 2014
Elfdrive
New York Times, dateline June 11, 2019
With a near record-setting investment announced last week, the self-driving car service Elfdrive is the hottest, most valuable technology start-up on the planet. It is also one of the most controversial.
The company, which has been the target of protests across Europe this week, has been accused of a reckless attitude toward safety, of price-gouging its customers, of putting existing cabbies out of work and of evading regulation. And it has been called trivial. In The New Yorker last year, George Packer huffed that Elfdrive typified Silicon Valley’s newfound focus on “solving all the problems of being 20 years old, with cash on hand.”
It is impossible to say whether Elfdrive is worth the $117 billion its investors believe it to be; like any start-up, it could fail. But for all its flaws, Elfdrive is anything but trivial. It could well transform transportation the way Amazon has altered shopping — by using slick, user-friendly software and mountains of data to completely reshape an existing market, ultimately making many modes of urban transportation cheaper, more flexible and more widely accessible to people across the income spectrum.
Elfdrive could pull this off by accomplishing something that has long been seen as a pipe dream among transportation scholars: It has the potential to decrease private car ownership.
In its long-established markets, like San Francisco, using Elfdrive every day is already arguably cheaper than owning a private car. Elfdrive says that despite dust-ups about “elfpricing” at busy times, its cheapest service is usually 30 percent less expensive than taxis.
The competition between Elfdrive and its rivals is likely to result in more areas of the country in which self-driving becomes both cheaper and more convenient than owning a car, a shift that could profoundly alter how people navigate American cities.
Over the next few years, if services like Elfdrive do reduce the need for private vehicle ownership, they could help lower the cost of living in urban areas, reduce the environmental toll exacted by privately owned automobiles (like the emissions we spew while cruising for parking), and reallocate space now being wasted on parking lots to more valuable uses, like housing.
Paradoxically, some experts say, the increased use of self-driving cars could also spawn renewed interest in and funding for public transportation, because people generally use taxis in conjunction with many other forms of transportation.
In other words, if Elfdrive and its competitors succeed, it wouldn’t be a stretch to see many small and midsize cities become transportation nirvanas on the order of Manhattan — places where forgoing car ownership isn’t just an outré lifestyle choice, but the preferred way to live.
If you haven't guessed, all I did was make minor edits to a Farhad Manjoo piece on Uber. My point is that it all fits: there really isn't that much difference between a car driven by a magical AI elf and one driven by a real person, if it's all controlled by an app. You can start to see the effects of self-driving cars, the good and the bad, from what's going on now with ride-sharing services. The big difference with self-driving cars is that the complaints won't come just from the taxi drivers, but from the truckers and delivery people as well.
Sunday, July 13, 2014
What to call the top and bottom part of (n choose k)
In my last post I asked for candidates for names for the top and bottom part of (n choose k) . Here are the candidates and my comments on them and ... the winner!
- Top part: Degree, Bottom part: Index.
- Top part: Bino, Bottom part: Mial
- Top part: Numerator, Bottom part: Denominator
- Top part: Outcomes, Bottom part: Possibilities
- Top part: Binomerator, Bottom part: I've got nothing
- Top part: *, Bottom part: *
- Top part: Biponendo, Bottom part: Bividendo
- Top part: Choosand, Bottom part: choosee
- Top part: Set size, Bottom part: Subset size.
While choices 4,8,9 are tempting along those lines, the winner is
Numerator/Denominator
Why? One of the people who suggested it gave a pointer. The pointer actually went to calling the top and bottom part of the Legendre symbol Numerator and Denominator. And that's just it- there are several other math things that have a top and bottom part. We could try to find a name for each one, OR just use Numerator and Denominator for all of them. That seems to work. SO- next time you write a paper and need to refer to the top part of a binomial coefficient or a Legendre symbol or something else, use the terms Numerator and Denominator.
CODA: `The Denominator' could be an Arnold Schwarzenegger character: a math teacher by day, a crimefighter by night. Catchphrase: I'm going to put you under!
Wednesday, July 09, 2014
Is there a word for the top part of a binomial coefficient?
Consider the sequence:
x/y, (x+1)/y, (x+2)/y, ..., (x+z)/y
one could say that in this sequence of fractions the numerator goes through z+1 consecutive numbers.
Consider the sequence
(x choose y), (x+1 choose y), (x+2 choose y),...,(x+z choose y)
one could say that in this sequence of binomial coefficients the top-part goes through z+1 consecutive numbers.
Is there a better way to say this? That is, is there a term for the top-part of a binomial coefficient? Or for that matter the bottom part? I have not been able to find one on the web. Hence I propose a contest:
- Leave as a comment a proposed name for the top-part and for the bottom-part of a binomial coefficient.
- If you find a website that has some official name, leave that as a comment.
Monday, July 07, 2014
Favorite Theorems: Compressing the Local Lemma
Not only did Moser give a near optimal construction, but he did so in my favorite STOC talk of the decade.
A Constructive Proof of the Lovász Local Lemma by Robin Moser
The Lovász local lemma informally states that if you have a large set of events with limited dependence and they individually have a reasonable chance of happening, then there is a positive probability that they all happen. Moser focused on the special case of Boolean formula satisfiability: If we have a k-CNF formula φ where each clause shares a variable with at most r other clauses, with r < 2^k/8, then φ is satisfiable. Note the result does not depend on the number of variables or clauses.
Moser gave this simple algorithm and a proof using entropy that I recognized as a Kolmogorov complexity argument and so excited me that I immediately wrote up the Kolmogorov proof as a blog post.
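The algorithm, in its simpler Moser-Tardos form, fits in a few lines: start from a random assignment and, while some clause is violated, resample just that clause's variables. Here is a hedged Python sketch of that idea; the DIMACS-style clause encoding and the function names are my own, not from Moser's paper:

```python
import random

def find_violated(clauses, assign):
    """Return a clause with every literal false under assign, or None."""
    for clause in clauses:
        if not any(assign[abs(lit)] == (lit > 0) for lit in clause):
            return clause
    return None

def moser_tardos(clauses, num_vars, seed=0):
    """Resample the variables of a violated clause until none remain.

    Clauses are lists of nonzero ints: lit > 0 means the variable
    appears positively, lit < 0 means negated.
    """
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    while (clause := find_violated(clauses, assign)) is not None:
        for lit in clause:  # resample only this clause's variables
            assign[abs(lit)] = rng.random() < 0.5
    return assign
```

Under the local-lemma condition the expected number of resamplings is small (that is exactly what the entropy/Kolmogorov argument shows), though this sketch would loop forever on an unsatisfiable formula.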
Moser's paper kickstarted a whole research agenda, with tight bounds using even simpler algorithms (though more complex proofs) and various generalizations. A whole tutorial day of STOC 2014 focused on the local lemma describing research that directly or indirectly came from Moser's most beautiful construction.
Wednesday, July 02, 2014
Four Centuries of Logarithms
Napier invented logarithms to make calculations like multiplication, division and exponentiation easier, using identities like log(ab)=log(a)+log(b). Logarithmic tables and slide rules came soon thereafter. Slide rules became the standard way to do mathematical computations for your typical scientist/engineer until reasonably priced calculators appeared in the late 70's. My chemistry teacher in 1980 taught us how to use a slide rule, even though we all had calculators, and I even (foolishly) tried using my father's slide rule on a chemistry test.
The exhibit struggled to find current uses for logarithms, mentioning only the Richter scale. In theoretical computer science we use logarithms all the time. Here's an incomplete list off the top of my head.
- As part of a running time usually as a result of divide and conquer. Sorting, for example, takes Θ(n log n) comparisons.
- The size of a number, pointer or counter. To count up to n requires log n bits of storage.
- The representation of a small amount of space as in the complexity classes L and NL.
- To capture entropy, coupon collector and other topics in probability and information.
- Roughly captures the sum of 1/i and exactly captures the integral of 1/x.
- The inverse of exponential growth.
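The first two items in the list are easy to see concretely. A small Python illustration (my own toy examples, not from the exhibit): counting up to n takes about log2(n) bits, and repeated halving, the source of the log factor in Θ(n log n) sorting, bottoms out after about log2(n) levels.

```python
import math

# Counting up to n requires about log2(n) bits of storage.
n = 1_000_000
bits = n.bit_length()                     # smallest b with n < 2^b
assert bits == math.ceil(math.log2(n + 1))

# Divide and conquer: halving until one element remains gives
# floor(log2(n)) levels -- the log in Theta(n log n) sorting.
def levels(n):
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(bits, levels(n))  # prints "20 19"
```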
Thanks, John Napier, for the logarithm and for making our lives just a little less linear.
Sunday, June 29, 2014
3000 lunches
Last week fortune smiled on two of my friends:
- Ken Regan made the cover of Chess Life for his work on catching chess cheaters. (See here.)
- Jacob Lurie, whom I mentored when he was a high school student (1993-1995), won $3,000,000 for doing pure math. I am not kidding. See here, here, and here. This is a relatively new award called the Breakthrough Prize.
$3,000,000 is the largest amount of money for prizes of this sort (the Nobel is a paltry $1,200,000). Even so, I had not heard of the award until there was a math one. I wonder if it will become a household name like the Nobel has.
A few weeks ago I told my student Tucker, who is going to Digipen (a 4-year college with a BIG emphasis on programming computer games), that he is more likely to become a millionaire than my student Sam, who is going to CMU to study pure math. I could be wrong.
I taught Jacob lots of recursion theory and he won the Westinghouse Award with a paper on Recursive Surreal Numbers; however, the work was all his own and didn't use much of what I taught him. After that I made a policy that for every $1000 a high school student of mine wins, I get a free lunch. I emailed Jacob, and while he's not sure he'll give me 3000 lunches, he will treat me to lunch next time I am in Boston.
To quote the article, he won it for
Foundations of higher category theory and derived algebraic geometry for the classification of fully extended topological quantum field theories; and for providing a moduli-theoretic interpretation of elliptic cohomology.
I know what some of those words mean.
Thursday, June 26, 2014
Computer Dating
My brother went through his old stuff cleaning out his apartment in New York and stumbled upon the 1981 computer dating questionnaire I wrote in high school. Forgive the dated and adolescent nature of the questions.
The computer science teacher showed us an example of a computer dating form created by some company, a new idea at the time. I decided to run computer dating at the high school, created a questionnaire, and passed it around. I did this for a few years; the 1981 form must have been the last, as that's the year I graduated. In those ancient-history days, my fellow students filled out the forms on paper and then I would type the answers into the big teletype machines in the computer lab. I wrote a program to create a matching; I have no idea what algorithm I used, but I'm sure it was quite simplistic. I always got matched to the same person, but the pickup line "my computer dating program matched us up" didn't go very far. I did hear of one couple having one (and only one) date because of the program. There's a reason I became a theorist.
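I obviously don't remember the actual program, but a simplistic matcher of the kind I describe might have looked like this greedy sketch. Everything here is hypothetical — the questionnaire format, the scoring rule, and the names are all made up for illustration:

```python
def score(a, b):
    """Count questionnaire answers two students have in common."""
    return sum(x == y for x, y in zip(a["answers"], b["answers"]))

def greedy_match(boys, girls):
    """Pair each boy with the highest-scoring still-unmatched girl."""
    matches, taken = [], set()
    for boy in boys:
        best = max((g for g in girls if g["name"] not in taken),
                   key=lambda g: score(boy, g), default=None)
        if best is not None:
            taken.add(best["name"])
            matches.append((boy["name"], best["name"]))
    return matches
```

Greedy matching like this is far from optimal (it is order-dependent), which fits "quite simplistic."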
My daughters tell me the whole scene has changed with traditional dating quite rare these days. "First you become boyfriend/girlfriend and then you date" which seems so backwards to me.
Now we have websites that use sophisticated machine learning algorithms to connect people. I met my wife the old fashioned way--at a wine and cheese party sponsored by a Boston-area Jewish singles group.
Monday, June 23, 2014
How many ways can you make change of n cents?
(For a related post see my post on Schur's theorem. The paper THIS post refers to is here.)
(ADDED LATER: A commenter pointed to Graham-Knuth-Patashnik for a closed form for pennies, nickels, dimes, quarters, half-dollars. I have written up their account and it is available here. I also apply their technique to get just the 1,5,10,25 case.)
How many ways are there to make change of a dollar using pennies, nickels, dimes, and quarters? This is a well known question; however, the answers I found were disappointing. They were of two types:
- There are 242 ways to make change. The author then points to a program he wrote or to the actual list of ways to do it.
- The number of ways to make n cents in change is the coefficient of x^n in the power series for 1/((1-x)(1-x^5)(1-x^{10})(1-x^{25})). This can be worked out; I have never seen it worked out.
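That coefficient really is easy to work out mechanically: multiply the series 1/(1-x^c) together, truncating at degree n, and read off the coefficient of x^n. A sketch (my own code, not from any of the answers I found):

```python
def coin_coeff(n, coins=(1, 5, 10, 25)):
    """Coefficient of x^n in prod_c 1/(1 - x^c), truncated at degree n."""
    series = [1] + [0] * n          # the constant series 1
    for c in coins:
        # Multiplying by 1/(1 - x^c) = 1 + x^c + x^{2c} + ... means
        # each coefficient picks up the one c places below it.
        for d in range(c, n + 1):
            series[d] += series[d - c]
    return series[n]

print(coin_coeff(100))  # 242 ways to change a dollar
```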
I looked for a closed form on the web but could not find one. So I did it myself. The paper is pointed to above. I did not use generating functions, just recurrences. I do not know if it is new. Whether it is old or new I hope you enjoy it. The paper actually gives a closed form for the coin sets {1,x,kx} and {1,x,kx,rx}. Along the way we derive the answer for a dollar and the usual currency by hand.
This raises some questions:
- Is my formula new? I'd be surprised either way. (Is it possible to be surprised at both statement A and statement NOT(A)?) If it's new I'll be surprised, since the question has been around for so long that surely someone would have done it. If it's known I'll be surprised, since (a) I went to JSTOR, searched all the math journals they had for the words ``change'' and ``coin'', looked at the over 400 such articles (just the titles and abstracts), and didn't find it, (b) this came up on math.stackexchange here and, true to form, every answer was either a program or a generating function, and (c) other searches also did not turn up the result. (ADDED LATER: Since the 1,5,10,25,50 case was known, I guess I AM surprised.)
- Even if it's not new it is clearly not well known. Why is that? It's a natural problem that people seem to be interested in. One conjecture: the COMP SCI people interested in it were happy to write a program, and the MATH people interested in it were happy to say `it's merely the nth coeff of...'. So a closed form solution seems not to be of interest to either camp.
- Is it interesting? (a) Comp sci: the closed form solution gives an answer in O(1) steps instead of the usual dynamic programming O(n) solution, and (b) Math: the closed form solution tells you the coefficients of the power series of 1/((1-x)(1-x^5)(1-x^{10})(1-x^{25})).
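The O(1)-versus-O(n) contrast is easiest to see in the smallest case. With coins {1,5} the count is simply floor(n/5)+1 (choose how many nickels to use and fill the rest with pennies), an O(1) formula, while the generic dynamic program does O(n) work per coin. A quick check of the two against each other (my own toy code, not Bill's formula):

```python
def ways_dp(n, coins):
    """Standard O(n * |coins|) dynamic program for the number of ways."""
    ways = [1] + [0] * n
    for c in coins:
        for d in range(c, n + 1):
            ways[d] += ways[d - c]
    return ways[n]

def ways_closed(n):
    """Closed form for coins {1, 5}: pick 0..floor(n/5) nickels."""
    return n // 5 + 1

print(ways_dp(100, (1, 5, 10, 25)))  # 242 — agrees with the known count
```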
Side Bar: I asked the 10 students in my REU program to just GUESS how many ways there are to make change of a dollar with pennies, nickels, dimes, and quarters. Most made guesses under 100. The lowest guess was 30. One person guessed 100! and one guessed 100!/2. I then told them that it was between 30 and 100! and assigned them to derive it themselves by hand. Most were able to.