We had an incredible year for theorems in 2015, the strongest year in computational complexity in the last decade. Topping the list as the theorem of the year is László Babai's quasipolynomial-time algorithm for graph isomorphism. It's a little risky because nobody, including Babai himself, has fully verified the proof, but given Babai's reputation and the warm reception of his lectures, I have little doubt it will all work out. Again I strongly recommend watching the video of Babai's first lecture. Not only is Babai a great researcher but he's an excellent speaker as well. For those up to the challenge, the paper is now available as well.
Beyond Babai's result we had a great year including random oracles separating the polynomial-time hierarchy, the (3+1/86)n size lower bound for general circuits, Terry Tao's solution to the Erdős discrepancy problem, explicit constructions of Ramsey Graphs and new bounds on threshold circuits.
We celebrated 50 years of computational complexity and Moore's law, the centenary of Hamming and the bicentenaries of Ada Lovelace and George Boole, the latter giving us Boolean algebra. In 2016 we will celebrate the centenary of the person who used those bits to describe information.
We thank our guest posters Dylan McKay and Thomas Zeume. Dylan's metric group problem is still open if anyone wants to try and crack it.
In 2015 we continued to see CS departments struggle to meet the demand of students and industry. US universities faced many difficult challenges including dealing with race relations, freedom of speech and safety issues. Too much tragedy in the US and abroad, and a far too divisive political campaign. Here's hoping for reason to prevail in 2016.
We remember Gene Amdahl, Alberto Apostolico, Barry Cooper, George Cox, Jiří Matoušek, John Nash, Leonard Nimoy, Hartley Rogers Jr., Donald Rose, Karsten Schwan and Joe Traub.
Wishing everyone a Happy New Year and a successful 2016.
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Monday, December 28, 2015
Tuesday, December 22, 2015
Guns and Crypto
“We believe it would be wrong to weaken security for hundreds of millions of law-abiding customers so that it will also be weaker for the very few who pose a threat,” said a spokesperson from Smith & Wesson on the recent calls for increased gun control.
The quote was actually from Apple on a proposed British law that would require the company to devise methods to break into iPhone communications.
In the wake of Paris and San Bernardino we've heard calls for controls on both guns and cryptography with eerily similar arguments coming from defenders. If we ban guns/crypto then only bad people will have guns/crypto. Any attempts to limit guns/crypto is a slippery slope that takes away our constitutional rights and our freedoms.
I don't have a gun but I do use encryption, built into the iPhone and the various apps and browsers I use to communicate. Fat lot of good it does me as hackers stole my personal information because I shopped at Home Depot and Target. Because my health insurance runs through Anthem. Because I voted in the State of Georgia.
I am a strong believer in individual rights: a person should be able to use cryptography to protect their communications and a gun, if they wish, to protect their family. But I do see the value in gaining access to communications to stop a terrorist, as well as making it harder for them to get the weapons to carry out their acts. Why can't the fingerprint technology that unlocks my iPhone also unlock a gun? The gun/crypto advocates don't trust the government to implement any restrictions reasonably and thus fight any changes.
No laws can completely eliminate or even restrict guns or crypto but they can make them harder to use. The challenges aren't technological; we can create guns or crypto protocols that perform as we want them to perform. The challenges are social: finding the right balance between rights and security and governments we can trust to enforce that balance.
Monday, December 21, 2015
Blog Followers
If you follow this or any other Blogger-based blog via a non-Google account, you'll need to follow with a Google account instead. Details.
Wednesday, December 16, 2015
Simons and Berkeley
The Simons Institute for the Theory of Computing in Berkeley held two programs this fall, Fine-Grained Complexity and Algorithm Design and Economics and Computation both ending this week. It would have been a great fall for me to spend there as I have strong connections in both groups, but as a department chair and father to a high-school senior it would have been impossible to be away from Atlanta for an extended period of time.
But I did manage some short trips, my first to Simons. In September I went to attend the workshop on Connections Between Algorithm Design and Complexity Theory but had to leave early; to make up for it I visited for a few days last week as well. The Simons Institute has taken over Calvin Lab, a circular building on the UC Berkeley campus: lecture rooms on the first floor, wide-open working spaces on the second floor where most people congregate, and visitor offices on the third floor. It's not just the space but an incredible group of complexity theorists and EC types there this fall.
This year marks thirty years since I spent that not-so-happy year of grad school in Berkeley. The city of Berkeley has mellowed a bit. When I lived there before, many recent graduates didn't leave and instead remained in their cheap rent-controlled apartments. Now you see almost a bimodal distribution centered around college-aged students and late middle-aged residents. Berkeley housing is getting hard to find again, this time because of overflow from limited affordable housing in San Francisco. Still Berkeley seems like a city that has never grown up. I guess people like it that way.
Thursday, December 10, 2015
Ada Lovelace (1815-1852)
Back in my teens I had ordered my first computer, a TRS-80 from Radio Shack, and I grew anxious in the weeks before it arrived. I learned the basics of BASIC and started writing programs on paper ready to be typed into this new machine. Imagine though if I had to wait not just a few weeks but over a hundred years. Such was the fate of Augusta Ada King, Countess of Lovelace, born two hundred years ago today.
Ada, daughter of the poet Lord Byron, showed an early talent in mathematics. At age 17 she became a friend of Charles Babbage, who had invented the difference engine and proposed the more complex analytical engine. Babbage gave a talk at the University of Turin and Ada took on the task of translating an article written by the Italian engineer Luigi Menabrea based on the talk. Ada did more than just translate: she added her own thoughts, which included a program that could be run on the analytical engine computing a specific sequence of numbers. The analytical engine never got built and Ada never saw her program run, but nevertheless she gets credit as the first computer programmer and, most notably, in 1980 the US Department of Defense named their preferred computer language "Ada" in her honor.
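The sequence in Ada's famous Note G was the Bernoulli numbers. As a purely illustrative modern sketch (my Python, certainly not Ada's notation), the same sequence falls out of the standard recurrence sum_{k=0}^{m} C(m+1,k) B_k = 0 for m ≥ 1, in a few lines:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions, using the
    recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0 (convention B_1 = -1/2)."""
    B = [Fraction(1)]                 # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-s, m + 1))
    return B

# B_0..B_6: 1, -1/2, 1/6, 0, -1/30, 0, 1/42 (odd entries past B_1 vanish)
print(bernoulli(6))
```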
Here's to Lady Ada Lovelace, born way too far ahead of her time.
Monday, December 07, 2015
crank math and crank people
In the same week I got email claiming:
1) Babai had shown that GI is in quasipoly time.
2) Opeyemi Enoch, a Nigerian Mathematician, solved the Riemann hypothesis.
I believe Babai's claim since (a) he's worked on the problem a lot and is a known expert, (b) the result is believable, and (c) the math we know now seems up to the task.
I do not believe Enoch's claim. It would be unfair to disbelieve it just because I've never heard of him. However, my sources in math say that RH is not in a position to be solved. Also, articles I've read on the web, including this article, seem to cast doubt on it, such as this quote:
motivation to solve the problem came from his students, who brought it to him with the hope of making 1 million ``off the internet''
Bobby Jindal (who?) dropped out of the Prez race. (Why am I bringing this up? Is he working on RH?) Bobby Jindal was a Rhodes Scholar. (Why am I bringing this up? Wait and see.)
In 2003 I was in a taxi (ask your grandparents what taxis were before we had Uber) and the driver began telling me lots of conspiracies: there is no gold in Fort Knox, the novel Goldfinger was Ian Fleming's attempt to tell us this, Reagan was shot because he knew this, and the American Presidency is controlled by a secret cabal. I pointed out that if that's the case, how did George Bush Sr. (clearly a member of the cabal) lose to Bill Clinton? His answer: Bill Clinton was a Rhodes Scholar and the cabal is controlled by Rhodes Scholars.
It's hard to argue with a conspiracy theorist about the past. But you can challenge him to predict the future. So I told him, ``if you can predict who will be the Democratic nominee for Prez in 2004 then I will look at your website and take it seriously. If not, then not.'' He smiled and just said ``Wesley Clark was a Rhodes Scholar.'' If I had asked him who would be the Republican nominee in 2016 then he might have said ``Bobby Jindal is a Rhodes Scholar.''
The way to expose conspiracy theorists, astrologers, and the National Enquirer is to take their predictions seriously and test them. I wonder why anyone believes these things given their track record.
Judging math is different: a math paper can be read and should stand on its own.
Thursday, December 03, 2015
Moonshots
In 1961 Kennedy said "this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth". In 1969 Neil Armstrong and Buzz Aldrin walked on the moon and returned to earth safely. In the 46 years since, humans have gone no further.
Bill Gates founded Microsoft to place "a computer on every desk and in every home". Once he succeeded, Microsoft lost its way. Their current mission takes a different tack: "Empower every person and every organization on the planet to achieve more."
In my life I've seen many a moonshot succeed, from a computer chess program that can beat the best humans to the classification of finite simple groups and the proof of Fermat's last theorem. They all seem to end up with a "Now what?" If you have a finite mission, you can only achieve finite things. We often think of moonshots as an amazing challenge but they serve as limits as well.
In computing we have a law that has become a mission: that the power of computing doubles roughly every two years. These improvements have enabled the incredible advances in learning, automation and connectivity that we see today. We keep making better computers because we don't have a limit, just a goal of getting better than where we are today.
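Back-of-the-envelope, doubling every two years compounds quickly; a toy calculation (illustrative only, nothing here is an official statement of Moore's law):

```python
# Compound improvement factor after `years` years, doubling every `period` years.
def doubling_factor(years, period=2):
    return 2 ** (years / period)

# A decade gives 2^5 = 32x; two decades roughly a thousand-fold.
print(doubling_factor(10))   # -> 32.0
print(doubling_factor(20))   # -> 1024.0
```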
When I first started writing grant proposals, a senior faculty member chided me for saying "the ultimate goal of computational complexity is to prove P ≠ NP". He said that would kill any grant proposals after we did settle the question. While we are in no danger of solving this problem in the near future, eventually we will, and I'd hate to see the field wither afterwards.
My favorite mission comes from Star Trek, particularly the TNG version with "Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before."
Sunday, November 29, 2015
One more sign that a Journal is bogus. Or is it?
I went to an AMS conference with my high school student Naveen (who presented this paper) and an undergrad from my REU program Nichole (who presented this paper). Note that this is a math conference: low prestige, mostly unrefereed, parallel sessions, but still a good place to pick up some new problems to work on, learn some things, and meet people.
Both Naveen and Nichole later got email from a journal urging them to submit their work! They were also invited to be on the editorial board! I can understand inviting a HS student who is a SENIOR to be on an editorial board, but Naveen is a SOPHOMORE!
Naveen and Nichole both emailed me asking what this was about, and I looked around and found, not to my surprise, that the journal is an author-pays journal of no standards. On the other hand, it was open access; on the other other hand, they had an article claiming that R(4,6)=36 (might be true, but if it were known, I would know it; see this blog post for more on that proof technique).
The pay-to-publish model is not necessarily bad, and is standard in some fields, but is unusual for math. Of perhaps more importance is that the journal had no standards. And they prey on the young and naive.
Or do they?
Consider the following scenario: Naveen publishes in this journal and this publication is on his college application, hence he gets into a good college with a scholarship. He knows exactly what he is buying. Or Nichole does this to help her grad school application. Or I do this for my resume and get a salary increase. Or an untenured prof does this to get tenure (deans can count but they can't read). And it gets worse: the school gives the prof tenure, knowing the papers are bogus, but now they can say they have a prof who publishes X papers a year! At some point I don't know who is scamming whom.
This blog post (not mine) gives several criteria for when a journal is bogus. I'll add one: when they ask a 15-year-old to be on their editorial board.
Monday, November 23, 2015
Star Trek Computing
In the wake of Leonard Nimoy's death last February, I decided to rewatch the entire original Star Trek series, all 79 episodes. I had watched them each many times over in high school in the 70's, though the local station removed a scene or two from each episode to add commercial time and I often missed the opening segment because I didn't get home from school in time. Back in those stone ages we had no DVR or other method to record shows. I hadn't seen many episodes of the original series since high school.
Now I can watch the entire episodes whenever I want in full and in order through the magic of Netflix. I finished this quest a few days ago. Some spoilers below.
I could talk about the heavy sexism, the ability to predict future technologies (the flat screen TV in episode 74), the social issues in the 23rd century as viewed from the 60's, or just the lessons in leadership you can get from Kirk. Given the topic of this blog, let's talk about computing in Star Trek, which they often just get so wrong, such as when Spock asks the computer to compute the last digit of π to force Jack-the-Ripper to remove his consciousness from the ship's computers.
Too many episodes end with Kirk convincing a computer or robot to destroy itself. I'd like to see him try that with Siri. In one such episode, "The Ultimate Computer", a new computer is installed in the Enterprise that replaces most of the crew. A conversation between Kirk and McCoy sounds like many we have today (source).
MCCOY: Did you see the love light in Spock's eyes? The right computer finally came along. What's the matter, Jim?
KIRK: I think that thing is wrong, and I don't know why.
MCCOY: I think it's wrong, too, replacing men with mindless machines.
KIRK: I don't mean that. I'm getting a Red Alert right here. (the back of his head) That thing is dangerous. I feel. (hesitates) Only a fool would stand in the way of progress, if this is progress. You have my psychological profiles. Am I afraid of losing my job to that computer?
MCCOY: Jim, we've all seen the advances of mechanisation. After all, Daystrom did design the computers that run this ship.
KIRK: Under human control.
MCCOY: We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.
KIRK: Am I afraid of losing command to a computer? Daystrom's right. I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?
MCCOY: Jim, if you have the awareness to ask yourself that question, you don't need me to answer it for you. Why don't you ask James T. Kirk? He's a pretty honest guy.
Later in the episode the computer starts behaving badly and Kirk has to convince it to shut itself down. But what if the computer had just done its job? Is that our real future: ships that travel to the stars controlled only by machines? Or are we already there?
Thursday, November 19, 2015
A Silly String Theorem
First a note on a serious theorem: Babai has posted a video (mp4, 1h 40 m, 653MB) of his first talk on his Graph Isomorphism algorithm.
I was giving a talk on the Kleene star operator (don't ask) and came across this cute little problem. Say a language L commutes if for all u,v in L, uv=vu.
Problem: Show that L commutes if and only if L is a subset of w* for some fixed string w.
Here w* is the set of strings consisting of zero or more concatenations of w with itself. The if case is easy, but I found the other direction pretty tricky and came up with an ugly proof. I found a cleaner proof in Seymour Ginsburg's 1966 textbook The Mathematical Theory of Context Free Languages which I present here.
⇐ If u=w^i and v=w^j then uv = vu = w^(i+j).
⇒ Trivial if L contains at most one string. Assume L contains at least two strings.
Proof by induction on the length of the shortest non-empty string v in L.
Base case: |v|=1
Suppose there is an x in L with x not in v*. Then x = v^i b z for some i ≥ 0, some character b in Σ-{v} and some (possibly empty) string z. Then the (i+1)st character of xv is b and the (i+1)st character of vx is v, contradicting xv=vx. So we can let w=v.
Inductive case: |v|=k>1.
Lemma 1: Let y=v^r u be in L (u might be empty). Then uv=vu.
Proof: Since yv=vy we have v^r u v = v v^r u = v^r v u. Canceling v^r gives uv=vu.
Let x in L be a string not in v* (if no such x exists we can let w=v). Pick j maximum such that x=v^j z with z not the empty string. By Lemma 1 we have zv=vz. Note |z| < |v|, otherwise v is a prefix of z, contradicting the maximality of j.
Lemma 2: For all y in L we have yz=zy.
Proof: As before let y = v^r u. By Lemma 1, uv=vu.
Since yx=xy we have v^r u v^j z = v^j z v^r u.
v^r u v^j z = v^r u z v^j by swapping v's and z's.
v^j z v^r u = z v^r u v^j by swapping v's and z's, and v's and u's.
So we have v^r u z v^j = z v^r u v^j,
or y z v^j = z y v^j, and thus yz=zy.
By Lemma 2, the set L∪{z} commutes. Since 0 < |z| < |v|, by induction L∪{z} is a subset of w* for some w, so L is a subset of w*.
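To see the theorem in action on finite samples, here's a small Python check (my own sketch, not from Ginsburg): it verifies pairwise commuting and, when L commutes, finds the shortest w with L a subset of w*, using the fact, implicit in the proof, that |w| must divide the length of every nonempty word in L.

```python
from functools import reduce
from math import gcd

def commutes(L):
    """True iff every pair of strings in L commutes: uv = vu."""
    return all(u + v == v + u for u in L for v in L)

def common_root(L):
    """If L commutes, return the shortest w with L a subset of w*; else None."""
    if not commutes(L):
        return None
    words = [s for s in L if s]
    if not words:
        return ""                      # L is empty or contains only ""
    g = reduce(gcd, (len(s) for s in words))
    # w must be a prefix of any word, with |w| dividing every word length.
    for d in range(1, g + 1):
        if g % d == 0:
            w = words[0][:d]
            if all(s == w * (len(s) // d) for s in words):
                return w
    return None                        # unreachable when L commutes, by the theorem

print(common_root(["ab", "abab", "abababab"]))   # -> ab
print(common_root(["ab", "ba"]))                 # -> None (abba != baab)
```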
Monday, November 16, 2015
Is the word Quantum being used properly by civilians? Understood by them? You know the answer.
I've seen the word `civilians' expanded in use from non-military to non-X for various X. I'm not sure I've ever seen `civilians' mean `people who don't do math stuff' until the title of today's post. Well, there is a first time for everything.
I've blogged in the past about the use of the word Quantum (here). The phrase Quantum Leap means a BIG leap, whereas quantum stuff is small. Though, to be fair, the discovery (invention?) of quantum mechanics was a big leap, so maybe that IS proper use. The James Bond movie Quantum of Solace uses Quantum to mean small, the ONLY time I've seen Quantum used to mean small, so kudos to the title of an absolutely awful movie. Commenters on my prior blog post on the subject pointed out that the original meaning of quantum was a quantity or amount, without regard to size or discreteness. I think using it that way now would be very rare.
I came across a theatre called Quantum Theatre. What is Quantum Theatre? The following are actual quotes from their website.
Quantum artists mine all kinds of non-traditional spaces for the sensory possibilities they offer when combined with creative design.
We find it meaningful to place the audience and performer together, the moving parts inside the works.
We want to move people with our experiments.
The shows run the gamut from those you thought you knew but now experience like never before, to shows that didn’t exist until their elements mixed in our laboratory.
I came across a reference to this theatre in a place it didn't belong: in the middle of an article about Google and NASA trying to build a quantum computer (see here). This news might be exciting, but the article was full of mistakes and bad signs so I'm not too excited about it. Plus the reference to Quantum Theatre is just odd.
The BEST use of the word quantum that I've heard recently was in the episode The Ricks Must Be Crazy of the excellent TV show Rick and Morty:
The car is broken
Morty (a 14-year-old): W-What's wrong, Rick? Is it the quantum carburetor or something?
Rick (his grandfather, a brilliant scientist): Quantum carburetor? You can't just add a Sci-fi word to a car word and hope it means something.
Read more at: http://transcripts.foreverdreaming.org/viewtopic.php?f=364&t=20185
Thursday, November 12, 2015
A Primer on Graph Isomorphism
I spent 14 years on the faculty at the University of Chicago. I know László Babai well; we collaborated on some of my best-known work. I also know Ryerson 251, a room where I've seen hundreds of talks and given more than a few myself. So I could imagine the excitement in that room on Tuesday as Babai gave the most anticipated talk in the history of theoretical computer science, the first of several talks Babai is giving on his new algorithm for graph isomorphism [Video]. Gabriel Gaster extensively live-tweeted the event. Jeremy Kun has some details. Update (12/14): Paper now posted.
For this post instead of doing a bad job trying to overview Babai's proof, I'll explain the graph isomorphism problem and why it is important in the complexity world.
Suppose we have two high schools, say HS North and HS South, each with 1000 students. Consider a diagram (or graph) containing a point for each student at HS North with lines between students who are facebook friends, and a similar diagram for HS South. Is there a 1-1 map from the students at HS North to the students at HS South so that these diagrams look identical? That's the graph isomorphism problem.
To determine whether these two graphs are isomorphic you could look at all the possible mappings between students, but that's 1000!, or more than 4×10^2567, possible maps. We have long known how to search a smaller number of possibilities, especially if we put some restrictions on the diagrams, but the number is always exponential in n (the number of students) in the worst case. Ideally we'd like a polynomial-time algorithm and Babai gets very close: an algorithm that runs in time n^(log^k n) for some fixed k.
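To make the brute-force search concrete, here is a small Python sketch (the graphs and names here are my own toy examples, not anything from Babai's algorithm): it tries all n! vertex bijections, which is hopeless at n = 1000 but fine for n = 4.

```python
import math
from itertools import permutations

# Toy example: two 4-vertex "friendship diagrams" (a triangle with a pendant
# vertex), written as sets of undirected edges on vertices 0..n-1.
g1 = {(0, 1), (1, 2), (2, 0), (2, 3)}
g2 = {(1, 2), (2, 3), (3, 1), (3, 0)}  # the same graph, relabeled

def isomorphic(e1, e2, n):
    """Brute force: try all n! vertex bijections (exponential in n)."""
    norm = lambda es: {tuple(sorted(e)) for e in es}
    e1, e2 = norm(e1), norm(e2)
    for perm in permutations(range(n)):
        if {tuple(sorted((perm[u], perm[v]))) for u, v in e1} == e2:
            return True
    return False

print(isomorphic(g1, g2, 4))             # True
# And 1000! really does have 2568 digits, i.e. it is about 4 x 10^2567:
print(len(str(math.factorial(1000))))    # 2568
```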
Graph Isomorphism is one of those few problems, like factoring, known to be in NP but not known to be in P or NP-complete. Graph non-isomorphism is the poster child for the class AM, the problems solved by a randomized verifier asking a single question to a powerful prover. Graph non-isomorphism in AM implies that Graph Isomorphism is unlikely to be NP-complete and, under reasonable derandomization assumptions, that Graph non-isomorphism is in NP. Köbler, Schöning and Torán wrote a whole book on the computational complexity issues of graph isomorphism.
Even small progress in graph isomorphism creates waves. At a theory conference in the late 80's a speaker caused a major stir when he casually mentioned he had a proof (he didn't) that Graph non-isomorphism was in NP. Babai's announcement caused a huge tsunami and those of us who know him realize he wouldn't make such an announcement without being sure he has a proof. The talk put together a large number of ideas from combinatorics and graph theory. My impression is that those who saw the talk didn't leave convinced of the proof, but did feel Babai had found the pieces to make it work.
With Babai's breakthrough algorithm, the smart money now says that graph isomorphism sits in P. It took decades to get from quasipolynomial time to polynomial time for primality testing and the same time frame may be needed to get polynomial time for graph isomorphism. But it will likely happen and the complexity of graph isomorphism then gets a whole lot simpler.
A couple of thoughts: All the breakthrough results that I can remember were released as papers, ready to devour. This is the first result of this caliber I remember being announced as a talk.
Also we think of theory as a young person's game, most of the big breakthroughs coming from researchers early in their careers. Babai is 65, having just won the Knuth Prize for his lifetime work on interactive proofs, group algorithms and communication complexity. Babai uses his extensive knowledge of combinatorics and group theory to get his algorithm. No young researcher could have had the knowledge base or maturity to be able to put the pieces together the way that Babai did.
More on Babai and graph isomorphism from Scott, Luca, Bill, Tim, Dick and Ken, Reddit, Science News, New Scientist and Science.
Sunday, November 08, 2015
Looking forward to the GI result
As you all know, László Babai will give a talk Tuesday, Nov 10 on a result: GI in quasipolynomial time (2 to a polylog). Other theory blogs have already commented on this (GLL, In Theory, Shtetl-Opt).
When I was in graduate school (early 1980's) it looked like GI would get into P. Three key results: graphs of bounded degree, graphs of bounded genus, and graphs of bounded eigenvalue multiplicity were all shown to be in P. These results used group theory and linear algebra in serious ways, so the thought was that more advanced group theory, perhaps the classification of finite simple groups (CFSG), would be used to show GI in P. If CFSG were used in an algorithm for GI in P then the algorithm might have an enormous constant, but that's okay since the quest for GI in P is not for practical reasons (I think all the people who want to solve it already have fast enough algorithms). I was looking forward to the debates on whether an algorithm with that big a constant would count as poly time (I would say yes). But no algorithm for GI in P, using CFSG or not, was found. Also note that it was possible (and is still possible) that GI is NOT in P. In my 2012 survey of P vs NP I also asked about GI. Of the 20 people who responded, 14 thought GI is in P and 6 thought not. Note that GI is not known to be in co-NP or BPP. It IS in IP[2], and if GI were NP-complete then PH would collapse, so it is unlikely that GI is NP-complete. This tells us NOTHING about whether it is in P.
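To get a feel for where quasipolynomial time (2 to a polylog) sits between polynomial and exponential, here is a quick Python sketch (my own illustration, not from any of the GI papers):

```python
import math

def quasipoly(n, k=2):
    # 2^{(log2 n)^k}: quasipolynomial in n; k = 1 gives plain polynomial growth.
    return 2.0 ** (math.log2(n) ** k)

for n in (2 ** 10, 2 ** 20):
    # Polynomial << quasipolynomial << exponential (2^n overflows a float
    # at these sizes, so we report its exponent instead).
    print(f"n = {n}: n^3 = {float(n) ** 3:.2e}, "
          f"quasi = {quasipoly(n):.2e}, 2^n = 10^{n * math.log10(2):.0f}")
```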
An analogous thing happened with PRIMES. Lots of results got it almost in P, lots of hard number theory was used, but then the progress stopped. Two differences: most people thought PRIMES IS in P (likely because it's in coNP and BPP), and this was finally proven in 2002.
Is Babai's proof likely to be correct? There are three parameters to consider when looking at that question: (1) Who is making the claim? (2) How likely is the claim to be true? (3) How likely is the claim to be provable with what is known today? You can give different weights to each one in different combinations.
László Babai is an expert in GI with a track record in the area. (Reading that over again, it undersells the case-- Babai is, as the kids say, awesome!) That GI is in quasipolynomial time is quite believable; even those 6 who think that GI is not in P might believe that. It's harder to know if today's techniques are up to the task; however, there are no results like `group theory won't suffice' or any kind of obstacles, so certainly the claim seems provable with today's technology.
Here's hoping...
Thursday, November 05, 2015
Computation and Journalism Part 2
The theory blogosphere is atwitter with an announcement of a talk next Tuesday at Chicago by László Babai: Graph Isomorphism in Quasipolynomial Time. Luca will blog from the scene and we will have a full report as details emerge.
More from the Comp and Journalism conference
0) (ADDED RIGHT BEFORE POSTING) The conference was about how journalists do or should use technology. Part of this is how news gets to people. Twitter and blogs are one method, as the above note about Babai's talk shows.
1) There was a panel discussion on context. When one reads an article it depends on where one is coming from. Here is a strange thing that technology has allowed journalists to do: an article can have slight differences depending on where it's being read. For example, if someone from New York is reading it the article may refer to Broadway, whereas if someone in LA is reading it it might refer to Hollywood. The tech also exists to give the (online) reader a choice-- so you can read it as if you are in place X. This scares me a bit-- what if an article can be tinkered with to appeal to left-handed Latino lesbians who work on the Lovász Local Lemma? Isn't our country fragmented enough? STILL, the point that context matters came through nicely.
2) There was a panel of three ACTUAL JOURNALISTS (I think all of the other speakers were academics) who used BIG DATA or MACHINE LEARNING or FILL IN BUZZWORD to write articles. ProPublica, a nonprofit newspaper (aren't they all?), did an article about rating doctors (here) that seems to be very well checked (example: if a patient has to come back again because of a complication, is it the surgeon's fault? Hard to answer). However, some doctors (those that got low ratings?) complained that the article was not peer reviewed. This brings up a question-- journalists run things by fact checkers and proofreaders, but do not have peer review. Academics do. Which is the better system? Another journalist did a study of the Supreme Court and found that there is a small set of well-connected lawyers (one used to play darts with Alito) who are involved in over half of the cases the court sees. What interested me is: what if you have an idea for an article and you do ALL of the number crunching, bring in ML people to help, and find out in the end OH, the Supreme Court actually is doing a fair job, or OH, doctors are pretty good. No story! They said this happens about 1/3 of the time: either you don't have enough data for a story or the story is not really a story.
3) There was a paper about YELP ratings of doctors. It seems that the actual doctor-stuff is rarely mentioned; usually it was how long the wait time was, how boring the magazines in the office were, etc. Also, most doctors got either 1 star or 5 stars. One thing they didn't mention but I will: Walter Palmer, the dentist who killed a lion in Africa, got lots of very negative YELP reviews from people who were never patients of his. So YELP ratings can be skewed by things independent of what is supposed to be measured. That is, of course, an extreme case.
4) Keynote by Chris Wiggins, an academic who, on his sabbatical, worked for the NYT on issues of How many copies of the paper should we print? and other Operations Research type questions. Print subscriptions still outnumber online subscriptions; the online side is doing pretty well and bringing in money, but they still NEED to find a better business model. I am reminded that when Jeff Bezos bought the Wash Post, Stephen Colbert said there are more people buying newspapers than there are people buying newspapers.
(My spellchecker thinks online is not a word. Everyone else does.)
5) There was a panel on Education. From their point of view, very pessimistic-- most schools don't offer courses in how to use technology in journalism, there are no real books, and there is no agreement on what's important. And I suspect they will have a hard time updating courses once they have them. I wonder if the early days of Comp Science were like that.
6) The issue of jobs in journalism going away was not quite ignored but also not quite confronted either. Some people told me that journalism schools are not being honest with their students about the dismal job market. Then again, they are journalism students-- they should investigate!
7) A journalism student told me of a case going to the Supreme Court that may be very important for privacy: Spokeo vs Robins: here
8) UPSHOT: I am glad I went; I learned some things outside of what I usually think about. Unlike at RATLOCC (Ramsey Theory and Logic), I was able to tell my mom what I learned. Since I didn't bring my laptop or have TV access, I was able to work on some math and had a minor breakthrough. But that's for a later blog.
(My spellcheck thinks blog is not a word. It also thinks spellcheck is not a word.)
Monday, November 02, 2015
Conference on Computation and Journalism (Part I)
(In honor of Boole's Birthday yesterday see this.)
I recently (Oct 2 and 3, 2015) went to a conference on Computation and Journalism (see here for their schedule). I went since I co-blog and have a Wikipedia page (here), hence I've gotten interested in issues of technology and information (not information theory, but real-world information).
I describe some of what I saw in two posts, today and Thursday. Many of the articles I mention can be gotten from the schedule page above.
1) I understood all of the talks. This is very different from STOC, FOCS, CCC, etc; however, old timers tell me that back in STOC 1980 or so one could still understand all of the talks.
2) Lada Adamic gave a keynote on whether or not Facebook causes an echo chamber (e.g., liberals only having liberal friends). Her conclusion was NO: about 25% of the friends of an L are a C and vice versa. This makes sense since people friend coworkers and family, not based on political affiliation. (Some of my in-laws are farther to the right than... most of my readers.) She also noted that 10% of the articles passed on by L's are conservative-- though this might not be quite indicative since they may often be of the `look at this crap that FOX news is saying now' variety. Note that Lada works for Facebook. Talking to people over lunch about that, the consensus was that (1) the study was valid, but (2) if the conclusion had been that Facebook is tearing apart our country, THEN would they have been allowed to publish it? I leave that as a question.
3) There was a Panel on Comments. The NYT has 14 people whose sole job is the soul-crushing job of reading people's comments on articles and deciding what to let through. Gee, complexity blog just needs Lance and me. I found out that the best ways to ensure intelligent comments and to get rid of trolls are to (1) make people register, give their real name, and even have a picture of themselves, and (2) engage with the commenters (like Scott A does, though they did not use him as an example). Lance and I do neither of these things. Oh well. (3) COMPUTERS: can we have a program that can tell if a comment is worth posting? Interesting research question. One basic technique is to not allow comments with curse words in them-- NOT out of a prudish impulse, but because curse words are an excellent indicator of comments that will not add to the discourse.
The most interesting question raised was: Do trolls know they are trolls? YES-- they just like destroying things for no reason. Picture Bart Simpson using a hammer on mustard packets just 'cause.
4) There were two sessions of papers on Bias in the news.
a) Ranking in the Age of Algorithms and Curated News by Suman Deb Rob. How should news articles be ranked? If articles are ranked by popularity then you end up covering Donald Trump too much. If articles are ranked by what the editors think, that's too much of THE MAN telling me what's important. Also, there are articles that are important for a longer period of time but never really top-ten important. Nate Silver has written that the most important story of the 2016 election is that the number of serious candidates (defined: prior Senators or Govs) running for the Rep nomination is far larger than ever before. But that's not quite a story.
b) The Gamma: Programming tools for transparent data journalism by Tomas Petricek. So there are data journalists you can see right through! Wow! Actually he had a tool so that you could, using World Bank data, get maps that show stuff via colors. His example was the story that China produces more carbon emissions than the USA, with a nice color graphic for it; but then the user or the journalist can easily get a nice color graphic of carbon emissions per person, in which case the USA is still NUMBER ONE! (Yeah!) Later he had a demo session. I tried to look at the % of people who go to college by country, but it didn't have info on some obscure countries like Canada. Canada? The project was limited by the data available.
d) Consumers and Suppliers: Attention Asymmetries by Abbar, An, Kwak, Messaoui, and Borge-Holthofer. They had 2 million comments posted by 90,000 unique readers and analyzed this data to see if there is an asymmetry between what readers want and what they are getting. Answer: YES.
c) The Quest to Automate Fact-Checking by Hassan, Adair, Hamilton, Li, Tremayne, Yang, and Yu. We are pretty far from being able to do this; however, they had a program that could identify whether or not something uttered IS checkable. When I am president I will lower taxes is not checkable, whereas Bill Maher's claim that Donald Trump's father is an orangutan is checkable.
George Boole (1815-1864)
George Boole, the father of symbolic logic, was born two hundred years ago today. His 1854 treatise, An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, gave us what we now call Boolean algebra: variables that take on just two values, TRUE and FALSE, and the basic operations AND, OR and NOT. Boole could not have imagined that a century later this logic would form the foundation for digital computers.
Where would computational complexity be without Boolean algebra? We can use AND, OR, and NOT gates in place of Turing machines to capture computation: A polynomial-sized circuit of these gates can simulate any polynomial-time computable algorithm.
Stephen Cook analyzed the complexity of determining whether a formula φ of Boolean variables connected by AND, OR and NOTs was a tautology, a mathematical law. Cook showed that every nondeterministic polynomial-time computable problem reduced to checking if φ is not a tautology, or equivalently that (NOT φ) is satisfiable. That paper, as you well know, gave us the P v NP problem.
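A brute-force tautology check is easy to write: try all 2^n truth assignments, which is exactly the exponential blowup the P vs NP question asks whether we can avoid. A minimal Python sketch (the formulas here are my own toy examples):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Try all 2^num_vars truth assignments; formula maps a bool tuple to bool."""
    return all(formula(a) for a in product([False, True], repeat=num_vars))

def is_satisfiable(formula, num_vars):
    # phi is NOT a tautology exactly when (NOT phi) is satisfiable.
    return not is_tautology(lambda a: not formula(a), num_vars)

phi = lambda a: a[0] or (not a[0]) or a[1]   # x OR (NOT x) OR y: a tautology
psi = lambda a: a[0] and not a[1]            # satisfiable, but not a tautology
print(is_tautology(phi, 2), is_tautology(psi, 2))    # True False
print(is_satisfiable(psi, 2))                        # True
```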
So here's to George Boole, whose desire to give mathematics a firm foundation produced simple tools powerful enough to describe everything.
Thursday, October 29, 2015
S. Barry Cooper (1943-2015)
Barry Cooper, a computability theorist and professor at the University of Leeds, passed away on Monday after a brief illness. Cooper was a big proponent of computability and Alan Turing in particular. He chaired the advisory committee for the Turing Year celebrations in 2012, and organized a number of events in Cambridge around the centennial of Turing's birth on June 23.
For the Turing year, Cooper and Jan van Leeuwen co-edited a massive volume Alan Turing: His Work and Impact which includes all of Turing's publications and a number of commentaries on the legacy of that work. I wrote a piece on Turing's Dots for the volume. Alan Turing: His Work and Impact received the R. R. Hawkins Award, the top award for professional and scholarly publishing across all the arts and sciences.
Barry Cooper was also the driving force behind Computability in Europe, an eclectic meeting to discuss all things related to computation.
I hope Barry now gets to meet his hero Turing but we'll miss him greatly down here.
Monday, October 26, 2015
A human-readable proof that every planar graph is 4.5-colorable
About 20 years ago I went to a talk by Jim Propp on the fractional chromatic number of a graph. (Here is a link to a free legal copy of the book on fractional graph theory by Ed Scheinerman and Dan Ullman.)
Let c be a natural number. A graph G=(V,E) is c-colorable if there is a mapping of the vertices of G to the vertices of K_c (the complete graph on c vertices) such that (x,y) ∈ E --> (x,y) is an edge in K_c.
Let K_{t:k} be the Kneser graph: the vertices are all k-element subsets of {1,...,t}, and (A,B) is an edge if A∩B=∅. A graph is t/k-colorable if there is a mapping of the vertices of G to the vertices of K_{t:k} such that (x,y) ∈ E --> (x,y) is an edge in K_{t:k}. (That is not quite right, as it might depend on whether you use, say, 15/10 or 3/2, but it is close. See pages 40-42 in the book linked to above for the formal definition.)
One can also show that this is equivalent to other definitions of the fractional chromatic number (one involves the LP relaxation of an integer-programming formulation of coloring).
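To make the homomorphism definition concrete, here is a small sketch in Python (the function names are my own). It builds the Kneser graph K5:2 (the Petersen graph) and verifies an explicit 5/2-coloring of the 5-cycle, whose fractional chromatic number is exactly 5/2:

```python
from itertools import combinations

def kneser_graph(t, k):
    """Vertices: k-element subsets of {1,...,t}; edges: disjoint pairs."""
    verts = [frozenset(s) for s in combinations(range(1, t + 1), k)]
    edges = {(a, b) for a in verts for b in verts if a != b and not (a & b)}
    return verts, edges

def is_homomorphism(g_edges, f, h_edges):
    """A t/k-coloring is a map f with (x,y) in E(G) => (f(x),f(y)) in E(Kt:k)."""
    return all((f[x], f[y]) in h_edges for x, y in g_edges)

# The 5-cycle C5: not 2-colorable, but 5/2-colorable.
c5_edges = [(i, (i + 1) % 5) for i in range(5)]

_, k52_edges = kneser_graph(5, 2)
# Explicit 5/2-coloring: vertex i maps to the pair {2i, 2i+1} mod 5 (shifted into {1..5}).
f = {i: frozenset({(2 * i) % 5 + 1, (2 * i + 1) % 5 + 1}) for i in range(5)}
print(is_homomorphism(c5_edges, f, k52_edges))  # True
```

Since C5 is not 2-colorable, this pins its fractional chromatic number between 2 and 5/2 (and it is in fact 5/2).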
The big open problem in the field and perhaps the motivation for it was this:
Known and the proof is human-readable: Every Planar Graph is 5-colorable.
Known and the current proof is NOT human-readable: Every Planar Graph is 4-colorable.
OPEN: Is there some number 4<c<5 such that every planar graph is c-colorable and the proof is human-readable?
I had not thought about this problem for a long time when I was invited by Dan Cranston to give a talk at the Discrete Math Seminar at Virginia Commonwealth University. Looking over the prior talks in the seminar I noticed a talk on Fractional Chromatic Number of the Plane by Dan Cranston. I inquired:
BILL: Fractional colorings! Has there been progress on finding a number c less than 5 such that there is a reasonable proof that every planar graph is c-colorable?
DAN: Yes. By myself and Landon Rabern. We have shown that every planar graph is 4.5-colorable in this paper! I've given talks on it, here are my slides.
BILL: This is big news! I am surprised I haven't heard about it.
DAN: The paper is on arXiv but not published yet. Also, while you and I think it's a great result, since it's already known that all planar graphs are 4-colorable, some people are not interested.
BILL: Too bad you didn't prove it in 1975 (the four-color theorem was proven in 1976).
DAN: I was in kindergarten.
BILL: Were you good at math?
DAN: My finger paintings were fractional colorings of the plane and I never used more than 4.5 colors.
DAN: By the way, the paper I spoke about in seminar was about fractional colorings of the plane. The graph is: all points in the plane are vertices, and two points have an edge if they are an inch apart. The ordinary chromatic number is known to be between 4 and 7. Tighter bounds are known for the fractional chromatic number; see the paper linked to above.
Some thoughts:
1) I would have thought that they would first get something like 4.997 and then whittle it down. No, it went from 5 right to 4.5. Reducing it any further looks hard, and they proved that any improvement cannot be a mere tweak of their proof.
2) The paper is readable. It's very clever and doesn't really use anything not known in 1975. But the problem wasn't known then, and of course the right combination of ideas would have been hard to find back then. In particular, the method of discharging is better understood now.
3) When I told this to Clyde Kruskal he wanted to know if there was a rigorous definition of ``human-readable'' since the open question asked for a human-readable proof. I doubt there is or that there can be, but this paper clearly qualifies. Perhaps a paper is human-readable if 2 humans have actually read it and understood it. Or perhaps you can parameterize it and have n-human-readable for n humans.
4) Why does my spell checker think colorable and colorings and parameterize are not words?
5) The serendipity: I'm very happy to know the result. I came upon it by complete accident. I'm bothered that there may be other results I want to know about but don't. How does one keep up? One way is to check the arXiv once a week or on some regular basis. But is that a good use of your time? I ask non-rhetorically.
Thursday, October 22, 2015
Complexity Blast from the Past
My friend Marty from grad school mysteriously disappeared in 1985 and showed up yesterday looking exactly the same as I remember him. He said he had traveled into my time and asked me how we finally proved P different from NP. I didn't give him the answer he sought, but I let him look around at what complexity has done in the past thirty years. He read through the night and agreed to share his thoughts on this blog.
When I left in '85, Yao and Håstad had just shown that Parity requires exponential-size constant-depth circuits and Razborov had shown that Clique does not have poly-size monotone circuits. It looked like showing some NP problem doesn't have poly-size circuits was right around the corner, so I'm a bit disappointed that didn't happen. I would have thought you would have proven NP is not computable in log space (nope), NP not computable by Boolean formulae (nope) or NP not computable by constant-depth circuits with threshold gates (nope). All I see are excuses like "natural proofs". You did show lower bounds for constant-depth circuits with mod m gates for m prime, but that happened just a year after I left. For m composite you need to go all the way to NEXP, and even that took another 25 years.
Back in 1984 we had a 3n circuit lower bound for an explicit function, and surely you have much better bounds now, at least quadratic. I almost didn't find any improvement until I saw a paper written last week that improves this lower bound to a whopping 3.011n. You can't make this stuff up.
I shouldn't be that hard on you complexity theorists. All that work on interactive proofs, PCPs, approximations and unique games is pretty cool. Some real impressive stuff on pseudorandom generators and error-correcting codes. Who would have thunk that NL = co-NL or that the entire polynomial-time hierarchy reduces to counting? Nice to see you finally proved that Primes is in P and undirected graph connectivity is in log-space.
Oh no! Biff stole Lance's copy of Arora-Barak and is taking it back in time. This could change computational complexity forever. I'd better go find Doc Brown and make things right.
[Lance's Note: Thanks to Mostafa Ammar for inspiring this post.]
Sunday, October 18, 2015
Amazon going after fake reviews but mine is still posted
I have always wondered why YELP and Amazon Reviews and other review sites work as well as they do since a company COULD flood one with false reviews (high marks for their company or low marks for their competitors). From what I've seen this does not seem to be a big problem.
Even so, it is A problem, and Amazon is taking action on it. See here for details.
A while back I posted a review that is... I'm not sure it's false, but I am surprised they allowed it and still allow it. What happened: I bought a copy of my own book Bounded Queries in Recursion Theory (Gasarch and Martin) since my dept wanted a copy to put in one of those glass cases of faculty books. I bought it used for about $10.00. I got email from Amazon asking if I wanted to review it, so I thought YES, I'll review my own book! I wrote:
This is a great book. I should know, I wrote it.
I am surprised Amazon ASKED me for my opinion.
Seriously- I also have a survey which says what's in the book
better, email me if you want it, gasarch@cs.umd.edu
That was in November of 2014. It's still posted. Is it a false review? Not clear, since everything I say in it is true and I am honestly saying who I am.
If Amazon removed it I would be surprised they noticed.
If Amazon does not remove it I'm surprised they allow it.
Is it possible to be surprised both ways?
Thursday, October 15, 2015
2015 Fall Jobs Post
When the 2014 fall jobs post is our most popular post, you know it is time for the 2015 jobs post. This year instead of (or in addition to) posting your jobs in the comments, post your theory jobs to Theoretical Computer Science Jobs site coordinated by Boaz Barak.
For job searchers the standard official sites for CS faculty jobs are the CRA and the ACM. Check out postdocs and other opportunities on the TCS Jobs site mentioned above and Theory Announcements. It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway, since some departments may hire theorists when other opportunities to hire dry up.
Many if not most computer science departments should be trying to expand again this year to keep up with ever-growing enrollments. Hiring in theoretical computer science is harder to predict, and not likely to be a priority in many departments. You can make yourself more valuable by showing a willingness to participate and teach beyond core theory. Machine learning, data science and information security are areas of great need where theorists can play a large role.
Don't ignore postdoc and lecturer positions that will give you some extra training and a stronger CV for future searches. Think global--there are growing theory groups around the world.
Good luck out there and I look forward to seeing your names on the Spring 2016 jobs post.
Sunday, October 11, 2015
Moore's Law of Code Optimization
An early version of Moore's law is as follows:
HARDWARE: The number of transistors on an integrated circuit doubles every 18 MONTHS.
Moore's law is often used to refer to the following:
SPEED: The speed of computers due to hardware doubles every 18 MONTHS.
There are other versions as well. I've heard that some versions of it are no longer holding (it couldn't go on forever). But what about the gains made NOT by hardware? Is there a Moore's Law of Code Optimization?
There is! It's called Proebsting's law:
SPEED: The speed of computers due to code opt doubles every 18 YEARS.
The paper pointed to gives evidence for this law.
So is Code Opt worth it? I've heard the following:
1) Some of the Code Opts eventually go into the hardware, so it's not quite fair to say that Code Opts improve speed that slowly.
2) Any improvement is worth having.
3) Being forced to think about such issues leads to other benefits.
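The gap between the two laws compounds dramatically. A quick back-of-the-envelope sketch (my own illustration, not from the post):

```python
def speedup(years, doubling_period_years):
    """Speedup factor if speed doubles once every doubling_period_years."""
    return 2 ** (years / doubling_period_years)

# Over the same 18-year span:
hardware_gain = speedup(18, 1.5)   # Moore: doubling every 18 months -> 2^12 = 4096x
compiler_gain = speedup(18, 18.0)  # Proebsting: doubling every 18 years -> 2x
print(hardware_gain, compiler_gain)
```

So over 18 years hardware buys you a factor of 4096 while code optimization buys you a factor of 2, which is the point Proebsting's law is making.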
Wednesday, October 07, 2015
Randomness by Complexity
Let n be the following number:
135066410865995223349603216278805969938881475605667027524485143851526510604859533833940287150571909441798207282164471551373680419703964191743046496589274256239341020864383202110372958725762358509643110564073501508187510676594629205563685529475213500852879416377328533906109750544334999811150056977236890927563 (RSA-1024)
What is the probability that the fifth least significant bit of the smallest prime factor of n is a one? This bit is fully defined--the probability is either one or zero. But if you gave me better than 50/50 odds one way or the other I would take the bet, unless you worked for RSA Labs.
How does this jibe with a Frequentist or Bayesian view of probability? No matter how often you factor the number you will always get the same answer. No matter what randomized process was used to generate the original primes, conditioned on n being the number above, the bit is determined.
Whether we flip a coin, shuffle cards, choose lottery balls or use a PRG, we are not creating truly random bits, just running a complex process whose unpredictability is, we hope, indistinguishable from true randomness. We know from Impagliazzo-Wigderson, and its many predecessors and successors, that any sufficiently complex process can be converted to a PRG indistinguishable from true randomness. Kolmogorov complexity tells us we can treat a single string with no short description as a "random string".
That's how we ground randomness in complexity: Randomness is not some distribution over something not determined, just something we cannot determine.
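For a toy illustration (my own, using a small semiprime; RSA-1024 is of course far beyond trial division), the bit in question is a simple deterministic function of n:

```python
def smallest_prime_factor(n):
    """Trial division: fine for toy numbers, hopeless for RSA-1024."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

n = 101 * 113             # a small semiprime standing in for the RSA challenge number
p = smallest_prime_factor(n)
fifth_bit = (p >> 4) & 1  # fifth least significant bit of p
print(p, fifth_bit)       # 101 0
```

The bit is either 0 or 1 with nothing random about it; the only thing separating this toy from the real question is the computational cost of running the function.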
Monday, October 05, 2015
Is Kim Davis also against Nonconstructive proofs?
Recall that Kim Davis is the Kentucky clerk who refused to issue marriage licenses for same-sex couples and was cheered on by Mike Huckabee and other Republican candidates for Prez. Had she refused to issue marriage licenses for couples that had been previously divorced then Mike Huckabee would not be supporting her, and the Pope wouldn't have had a private 15-minute meeting with her telling her to stay strong (NOTE- I wrote this post in that tiny window of time when it was believed she had such a meeting with the Pope, which is not true. The Pope DID meet with a former student of his who is gay, and that student's partner.) Had she refused to issue marriage licenses to interracial couples (this did happen in the years after the Supreme Court said that states could not ban interracial marriage, Loving v. Virginia, 1967) then... hmm, I'm curious what would happen. Suffice it to say that Mike Huckabee and the others would probably not be supporting her.
David Hilbert solved Gordan's problem in (I think) the 1890's using methods that were nonconstructive. This was considered controversial, and Gordan famously said this is not math, this is theology. Had someone else solved this problem in 1990 then the fact that the proof is nonconstructive might be noted, and the desire for a constructive proof might have been stated, but nobody would think the proof was merely theology.
I don't think the Prob Method was ever controversial; however, it was originally not used much and a paper might highlight its use in the title or abstract. Now it's used so often that it would be unusual to point to it as a novel part of a paper. If I maintained a website of Uses of the Prob Method in Computer Science it would be very hard to maintain, since papers use it without commentary.
The same is becoming true of Ramsey Theory. I DO maintain a website of apps of Ramsey Theory to CS (see here) and it's getting harder to maintain, since using Ramsey Theory is no longer quite so novel as to be worth a mention.
SO- when does a math technique (e.g., the prob method) or a social attitude (e.g., acceptance of same-sex marriage) cross a threshold where it's no longer controversial? Or no longer novel? How can you tell? Is it sudden or gradual? Comment on other examples from Math! CS!
Thursday, October 01, 2015
Cancer Sucks
Karsten Schwan said the title quote when we were gathered as a faculty two years ago, mourning Georgia Tech School of Computer Science faculty member Mary Jean Harrold, who died from the disease. Karsten just lost his own battle with cancer Monday morning and my department is now mourning another great faculty member.
Just a few months ago, Alberto Apostolico, an algorithms professor at Georgia Tech, also passed away from cancer.
I went back through the obituaries in the blog and we lost quite a few to cancer, often way too young, including Mihai Pătraşcu, Benoît Mandelbrot, Partha Niyogi, Ingo Wegener, Clemens Lautemann and Carl Smith. I just taught Lautemann's proof that BPP is in Σ₂ᵖ ∩ Π₂ᵖ in class yesterday.
With apologies to Einstein, God does play dice with people's lives, taking them at random in this cruel way. Maybe someday we'll find a way to cure or mitigate this terrible disease but for now all I can say is Cancer Sucks.
Monday, September 28, 2015
Venn Diagrams are used (yeah) but abused (boo)
In a prior blog entry I speculated about which math phrases would enter the English language and whether they would be used correctly. I thought Prisoner's Dilemma would enter and be used correctly, but that Turing Test and Venn Diagram would not enter.
Since then I've seen Turing Test used, but only because of the movie The Imitation Game. I don't think it will enter the English language until a computer actually passes it for real, which might not be for a while. See this excellent post by Scott Aaronson about a recent bogus claim that a machine passed the Turing Test.
Venn Diagrams seem to be used more (Yeah!) but incorrectly (Boo!)
1) In this article (which inspired this post), about who might replace John Boehner as Speaker of the House, there is the following passage:
Option 3: An acceptable and respected conservative like Jeb Hensarling or Tom Price emerges as speaker. Why these two? First, Paul Ryan doesn't seem to want the gig, so that leaves us with only a few options for someone who fits in the Venn diagram of being enough of an outsider, well liked, and sufficiently conservative.
Is this correct use? They really mean the intersection of outsider, well-liked, and sufficiently conservative. One can picture it and it sort of makes sense, but it's not quite correct mathematically.
2) In this ad for Venn Beer (in celebration of John Venn's 180th birthday!) they really mean union, not intersection.
3) This Venn Diagram about Vladimir Putin's and your Aunt's record collection doesn't really make sense, but I know what they mean and it's funny.
4) This Venn Diagram about how to woo women is incorrect, not funny, and not mathematically meaningful.
5) This Venn Diagram involves Doctors, Prostitutes, and TSA agents. At first it is funny and seems to make sense. But then you realize that the intersection of Doctors and Prostitutes is NOT people who make more per hour than you make all day; it's actually prostitutes with medical degrees. It's still funny and I see what they are getting at.
6) This Venn Diagram (it's later in the article) of Republican Candidates for the 2016 nomination for Prez is correct for the math and informative, though one may disagree with some of it (Is Trump really Tea-Party or should he be in his own category of Trumpness?)
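The distinction the items above keep tripping over is just intersection versus union. A tiny sketch (the set members are hypothetical, chosen only to mirror the speaker example):

```python
# Item 1 really describes an intersection of three sets:
outsiders    = {"Hensarling", "Price", "Webster"}
well_liked   = {"Hensarling", "Price", "Ryan"}
conservative = {"Hensarling", "Price", "Webster"}
print(sorted(outsiders & well_liked & conservative))  # ['Hensarling', 'Price']

# The beer ad in item 2, by contrast, describes a union:
ales, lagers = {"pale", "amber"}, {"pilsner", "amber"}
print(sorted(ales | lagers))  # ['amber', 'pale', 'pilsner']
```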
Thursday, September 24, 2015
21st Century Problems
My youngest daughter, Molly, a high school senior, was talking colleges with a woman about ten years her senior. The woman remembered all her friends watching the clock so they could go home to check their email to see if they were accepted. Molly said "Sheesh, you had to go home to check email?"
My other daughter Annie, a college junior, went on an overnight last Thursday to a place without cell phone reception. She spent Friday night with her friends in her class catching up on emails, texts and Facebook messages.
Now back in my day (that being the early 80's) we got our college acceptances and rejections by postal mail, where that one crucial bit of information could be determined by the thickness of the envelope. Some of my friends had their mail held by the post office so they could find out a few hours earlier.
In college I did have email access, in fact I wrote an email system for Cornell. But most students didn't use email so we resorted to other means. Student organizations could hire a service that put posters on key locations throughout campus. Chalk on sidewalks worked particularly well. The personals section of the Cornell Daily Sun served as a campus bulletin board. In my freshman dorm we had phones in our rooms but no answering machines. We did put little whiteboards on our doors so people could leave us messages. We had a lounge on our floor where you could find most people and talk to them in person. You young people should try that more often.
We had to coordinate activities and meeting places ahead of time, if someone was late you waited for them. On the other hand I never had to spend my Friday nights catching up on emails and texts.
Monday, September 21, 2015
When did Mathematicians realize that Fermat did not have a proof of FLT?
I recently came across the following passage which is about Fermat's Last Theorem (FLT).
Pierre de Fermat had found a proof, but he did not bother to write it down. This is perhaps the most frustrating note in the history of mathematics, particularly as Fermat took his secret to the grave.
AH- so at one time people thought Fermat DID have a proof of FLT. That is, a proof using just the math of his time, likely a very clever proof. I doubt anyone thinks Fermat had a proof in this day and age. Actually it has come up in fiction: in a 2010 Doctor Who episode, The Eleventh Hour, the Doctor has to prove to some very smart people that they should take his advice. He does this by showing them Fermat's proof of FLT. Good Science Fiction but highly unlikely as Science Fact. In an episode of ST-TNG (Title: The Royale. Year: 1989) it is claimed that FLT is still open. Whoops. But in an episode of ST-DS9 (Title: Facets. Year: 1995) they refer to `Wiles' proof of FLT'.
Wikipedia states: It is not known whether Fermat had actually found a valid proof for all exponents n, but it appears unlikely. I think that understates the case.
So here is a question for all you math historians out there: When did the math community realize that FLT was really hard?
We have one clue- the quote I began with. Its from... 2013. Whoops. The book is The Simpsons and their mathematical secrets by Simon Singh (Author of Fermat's Enigma which is about the quest to proof FLT). I've read the passages about FLT in the Simpsons book over again to make sure he doesn't someplace say that Fermat prob didn't have a proof. No--- he seems to really say the Fermat had a proof. So whats going on here? Possibilities:
1) I'm wrong. There are serious credible people who think Fermat had a proof and he talked to them; perhaps while working Fermat's Enigma. I find this unlikely- I have never, literally never, heard of anyone, not even math cranks, who think Fermat had a simple proof. Some cranks think THEY have a simple proof, though even that seems far less common after FLT was proven.
2) I'm right. He didn't have anyone who was both serious and credible check his book. I find this unlikely. He wrote Fermat's Enigma so surely he is in contact with people who are both credible and serious.
3) He did have someone check his book but thought the story was better the way he told it. (This was common on the TV show Numb3rs which never let a mathematical truth get in the way of a good story.)
I find this unlikely since a better way to say it is we'll never know if Fermat had a proof!
One problem with such mistakes is that they destroy his credibility on other math things he writes about. That info about Doctor Who I got from the book, but I suspect it's correct. And the math that appears in the Simpsons is checkable and seems correct. I give one example: in one episode you see in the background
3987^12 + 4365^12 = 4472^12
which appears correct on a calculator with only 10 digits of precision. Cute! But I have stopped reading any of the math history in the book for fear that I will get incorrect notions in my head.
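The near-miss is easy to check directly; a quick sketch (exact integer arithmetic versus what a 10-digit display shows):

```python
# The Simpsons "Fermat near-miss": 3987^12 + 4365^12 vs 4472^12.
lhs = 3987**12 + 4365**12
rhs = 4472**12

# Exact arithmetic exposes the fake...
print(lhs == rhs)  # False

# ...but the two numbers agree in their leading digits, so a
# calculator showing only ~10 significant digits displays them as equal.
print(str(lhs)[:10], str(rhs)[:10])
```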
However, back to my original question: Was there a time when people thought Fermat really had a proof? Was there a time when people thought there was an easy proof? When did that change?
Thursday, September 17, 2015
The Theorems Conference
All too often theoretical computer scientists get more obsessed by proofs than the theorems themselves. I suggest a theorems conference. Here's how it would work:
Authors would submit two versions of a paper. One has the statement of the theorem and why that theorem matters, but no proofs. The second version includes the proofs.
The program committee first makes tentative decisions based on the first version of the paper. If tentatively accepted, the PC then looks at the second version. The PC can reject the paper if the proofs have significant flaws, gaps or readability issues. The PC cannot reject the paper for any other aspect of the proof, such as length or lack of technical depth or originality.
This way we truly judge papers based on what they prove--what the results add to the knowledge base of the field.
Of course my plan has many flaws. Some papers with their proofs may have already been posted on archive sites which the PC members could have seen. More likely, the PC will guess the difficulty of the proof and judge the paper based on this perceived difficulty, and not on the theorem itself.
We need a culture shift, away from an emphasis on proofs. That's the only way we can judge our results for the results themselves.
Sunday, September 13, 2015
An Open(?) Question about Prime Producing Polynomials Part II in 3-D!
(Apologies- No, this post is not in 3-D)
I posted last week about An Open(?) Question about Prime Producing Polynomials.
I got several emails about the post with more information, inspiring this post!
All the math in this post can be found in my writeup Polynomials and Primes, unless otherwise specified. NONE of the results are mine.
You can read this post without reading the prior one.
1) KNOWN: Let f(x) ∈ Z[x] be a poly of degree d. Then there exists a non-prime in f(x)'s image (actually there are an infinite number of non-primes, in fact an infinite number of composites). If f(1) is prime then, among f(1+mf(1)) for m=0,1,2,..., at most 2d-2 are prime.
2) Algorithm to find an x such that f(x) is not prime: compute f(1), f(1+f(1)),...,f(1+(2d-2)f(1)) until you find one that is not prime. This takes 2d-1 evals. OPEN(?): Is there a better deterministic algorithm where we measure complexity by number of evals? Since this is a simple model of computation lower bounds might be possible.
3) There is the following randomized algorithm: Eval f(1)- if it's not prime you're done. If f(1) is prime then pick a random m where 0 ≤ m ≤ (2d-2)^2 and eval f(1+mf(1)). This is non-prime with probability at least 1 - 1/(2d-1).
4) What is it about Z that makes this theorem true? In my write up I show that if D is an integral domain with only a finite number of units (elements that have multiplicative inverses) then any poly in D[x] has to produce non-primes infinitely often. (A prime in D is an element a such that if a=bc then either b or c is a unit.)
5) What about if D has an infinite number of units? See this paper for examples of polynomials over integral domains D such that the poly only takes on only prime or unit values.
6) What about polys over Q? over R? over C? In my write up I prove similar theorems for Q and then use that to get theorems for C.
7) Looking at polys in Z[x,y] is much harder, see this survey.
8) If f(x)∈Z[x] is a poly then does there exist a prime in f(x)'s image? An infinite number of primes? Easy but stupid answers exist: f(x)=4x is never prime, and f(x)=2x is prime only at x=1. Better question: assume that f(x)'s coefficients are rel prime.
Dirichlet's theorem: if GCD(a,b)=1 then ax+b is prime infinitely often.
Open: is x^2 + 1 prime infinitely often?
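Item 2's deterministic algorithm is short enough to sketch. A minimal Python version is below: trial-division primality testing, with Euler's x^2 - x + 41 as the test polynomial. The 2d-1 bound on evaluations is the one claimed in the writeup; the code just mirrors it.

```python
def is_prime(n):
    # Per the convention here, -p counts as prime when p does.
    n = abs(n)
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_nonprime_value(f, d):
    """Return x with f(x) not prime, using at most 2d-1 evaluations.
    f is a degree-d polynomial in Z[x] given as a Python function."""
    p = f(1)
    if not is_prime(p):
        return 1
    # p divides f(1 + m*p) for every m, and (per item 1) at most
    # 2d-2 of those values are prime, so scanning m = 0..2d-2 works.
    for m in range(2 * d - 1):
        x = 1 + m * p
        if not is_prime(f(x)):
            return x
    raise AssertionError("unreachable for a nonconstant polynomial")

# Euler's polynomial x^2 - x + 41 (degree 2): the non-prime value is
# found at x = 42, since f(42) = 1763 = 41 * 43.
print(find_nonprime_value(lambda x: x * x - x + 41, 2))  # 42
```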
Thursday, September 10, 2015
Designer Chips
A computer architecture researcher talked to me about a role theoretical computer science can play for them: creating a new kind of computer processor. Microprocessors stopped getting faster a decade ago due to energy challenges so computer architects look for new ways to improve performance, moving away from the general-purpose CPU towards processors that handle more specific functions. The GPU, Graphics Processing Unit, has long been around to handle the graphics-intensive needs of modern computers and many have used GPUs for other purposes such as machine learning. These days we can program chips using FPGAs (Field-Programmable Gate Arrays) and are nearly at the point of cheaply compiling directly to hardware. How does this change the theory we do?
What kind of specialized chips would speed up our algorithms? If we want to find matchings on graphs, for example, is there some routine one could put in a chip that would lead to a much more efficient algorithm?
On the complexity side, how do we model a computer where we can program the hardware as well as the software? What are the right resource bounds and tradeoffs?
In general our notions of computers are changing, now with multi-core, cloud computing and designer chips. Not only should we focus on applying theory to these new models of computing, but we should think about what future changes in computing could yield more efficient algorithms. Theorists should be involved in planning the future of computing and we're not even doing a great job reacting to changes around us.
Monday, September 07, 2015
An Open(?) question about prime-producing-polynomials
Known Theorem: If f(x)∈ Z[x] is prime for all nat number inputs then f(x) is a constant.
NOTE- Recall that if p is a prime then so is -p.
Known Proof: Assume f(x) has degree d. By assumption f(1) IS prime; let f(1)=p. Look at
f(1+p), f(1+2p),...,f(1+(2d+1)p).
One can easily show that p divides all of these. Hence if they are all primes then they must all be p or -p. Since there are 2d+1 of them, at least d+1 of them are the same, say p. Hence f is the constant p.
END of known proof.
Note that the proof gives the following theorem:
Let f(x) ∈ Z[x] have degree d, and suppose f(1)=p is prime (we assume f(1) ≥ 0). Then the least a such that f(a) is NOT prime is ≤ 1+(2d+1)p.
(This can prob be improved a bit with some cases, but its good enough for now.)
Recall Euler's poly x^2-x+41 produces primes for x=0,...,40. But at 41 you get a composite. This is much smaller than the upper bound 1+(2d+1)p = 1+5*41=206.
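Both claims are easy to confirm by brute force; a quick sketch (with a simple trial-division `is_prime` helper):

```python
def is_prime(n):
    n = abs(n)  # the convention here: -p is prime when p is
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

f = lambda x: x * x - x + 41  # Euler's polynomial, d = 2, f(1) = 41

# Prime for x = 0..40; the first composite value is at x = 41,
# where f(41) = 41^2.
assert all(is_prime(f(x)) for x in range(41))
first_bad = next(x for x in range(1, 1000) if not is_prime(f(x)))
print(first_bad)  # 41, far below the bound 1 + (2d+1)p = 206
```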
Wolfram MathWorld has a page of other polys in Z[x] that produces lots of primes initially, but NONE come close to the bound.
QUESTIONS:
Prove a better upper bound.
Prove a better lower bound (fix d and produce an infinite sequence of polys of degree d...)
Close the gap!
If this is already known, then let me know please.
Can also ask for polys in Q[x], R[x], C[x]. For Q[x] and R[x] the same theorem is true- no poly can produce all primes. I suspect it's also true for C[x] but I haven't seen it stated anywhere. (ADDED LATER- The proof for C[x] is easy. First prove it for Q[x]; then, by Lagrange interpolation, any poly with f(integer)=integer infinitely often is in Q[x].)
You can also NOT include negative primes and see how that changes things.
Thursday, September 03, 2015
Whiplashed
I recently watched the movie Whiplash, about a college jazz band director, Fletcher played by J.K. Simmons, who torments his musicians to force them to be their best. The movie focuses on a drummer, Andrew, which makes for a great audio/video feast but in its essentials Whiplash is a story of a professor and his student.
I can imagine playing the role, “Do you think your proof is correct? Yes or No? Are you applying Toda’s theorem correctly or are you using the same crazy logic your dad used when he left your mom?” OK, maybe not.
Nevertheless Fletcher has a point. Too often I’m seeing graduate students doing just enough to get a paper into a conference instead of pushing themselves, trying to do great work and still not being satisfied. Fletcher says the two most dangerous words in the English language are “good job”. While that might be a little cruel, we do need to push our students and ourselves to take risks in research and be okay with failing. To roughly quote John Shedd and Grace Murray Hopper, "the safest place for a ship is in the harbor, but that’s not what ships are for."
Whiplash had a different kind of scene that definitely hit home. Andrew could not impress his family with the fact that he was lead drummer in the top college jazz band in the nation. I’ve been there, trying to get my mother excited by the fact that I had a STOC paper early in my career. "That's nice dear".
Sunday, August 30, 2015
Two candidates that I want to see run for Democratic Nomination
It has been noted that while there are 17 Republican candidates for the nomination, of which 10 have been declared serious by FOX News via the first debate, there are far fewer Democratic candidates for the nomination and only one has been declared serious by the powers that be. This may change if Biden runs.
I want to suggest two Democrats who I think should run. They have not made ANY moves in that direction, so it won't happen... until they see that this blog post endorsing them and they get inspired!
Rush Holt. Was a US Congressman from NJ (NOTE- an earlier version of this post incorrectly had him as a US Senator from WV, which his father was. Thanks to Dave MB's comment for pointing that out.) He has a PhD in Physics. I want a president who wins a Nobel Prize in something OTHER THAN Peace (T. Roosevelt, Wilson, Carter, Obama have won it for Peace; the fictional Bartlet on The West Wing won it for Economics). I can imagine one winning for Literature. But Physics- that would be awesome! While I doubt Dr. Holt will win one, it would be good to have someone as Prez who knows SOME science so he won't say stupid things about global warming. (Counter question- do the climate change deniers in the Senate really believe what they are saying or not?) He's also a Quaker, not sure how that plays into all of this. (I had a longer and incorrect passage here, but thanks to Andy P's comment below I changed it and it is now correct--- I hope.)
Would having a scientist in the White House be good for science funding? I would guess a marginal yes. Would it improve my chance of getting my REU grant renewed? I would guess no.
Sheldon Whitehouse. US Senator from Rhode Island. Look at his name- he was born to be Prez!
You may think these are stupid criteria. You may be right. But is it any better than I voted for X in the primary since X has a better chance of winning in the general (One of Clinton's arguments against Obama in 2008 was that Obama couldn't win since... well, you know) or he looks just like a prez (Warren Harding's main qualification) or he's Rich and obnoxious (I would say who I am thinking of, but he's been known to sue people).
Thursday, August 27, 2015
PACM
I serve on the conference committee of the ACM publications board and we've had extensive discussions on the question of the role of journals in publication venues. A number of CS conferences, though notably not in TCS, are moving to a hybrid publication model where their conference presentations make their way into refereed journal papers. One of our proposals is the creation of a specific venue for these activities, a new Proceedings of the ACM. In the September CACM, Joseph Konstan and Jack Davidson lay out this proposal, with pros and cons by Kathryn McKinley and David Rosenblum respectively. The community (that means you) is being asked to give their input.
The theory model has not significantly changed since I was a student. Papers submitted to a conference get reviewed but not refereed, the proofs read over usually just enough to feel confident that the theorem is likely correct. Once authors started submitting electronically they could submit entire proofs, though often in an appendix the program committee is not required to read.
The papers appear in a proceedings and to quote from the STOC proceedings preface
The submissions were not refereed, and many of these papers represent reports of continuing research. It is expected that most of them will appear in a more polished and complete form in scientific journals.

A small select number of papers from a conference are invited to a special issue of a journal where they do go through the full referee process. Some, but not most, of the other papers get submitted to journals directly. We don't have the proper incentives for authors to produce a journal version with full and complete proofs.
Should theory conferences move towards a more hybrid or PACM type of model? I've had several debates with my fellow theorists, many of whom feel the advantages of requiring journal-level papers get outweighed by the extra effort and time required of the authors and the reviewers.
Sunday, August 23, 2015
Interesting properties of the number 24 on someone's 24th wedding anniversary
The story you are about to read is true. Only the names have been changed to protect the innocent. The Alice and Bob below are not the crypto Alice and Bob.
------------------------------------------------------
BOB (to ALICE): It's our 24th anniversary! Last year when it was our 23rd anniversary we celebrated by having you tell me that 23 is one of only two numbers (the other is 239) that require 9 cubes to sum to them, and that it's open how many cubes you need for large n, though it's between 4 and 7. Oh that was fun! What do you have planned for our 24th anniversary?
ALICE (to BOB): I've prepared FIVE facts about 24! Oh, I mean 24, not 24 factorial! We'll see which one you want to discuss. Here they are:
1) 24 is the largest nat number n such that every nat number m ≤ √n divides n.
2) 24 is the least nat number that has exactly 8 distinct factors. (1,2,3,4,6,8,12,24)
3) 24 is the ONLY number m ≥ 2 such that 1^2 + 2^2 + ... + m^2 is a square. The sum is 70^2, so if we are married 70 years, I'll have an interesting fact about 70.
4) Let S be an n-dimensional sphere. How many spheres of the same size as S can kiss S? That's the kissing number, called kiss(n). kiss(2)=6 (so given a circle you can position 6 identical circles that kiss it), kiss(3)=12 (that's 3-dim), and kiss(4)=24.
5) It's one of the few numbers that is the title of a TV show.
BOB: Since it's our anniversary I'll go with the kissing number! Mathematically I'd go with the square thing.
---------------------------------------------------
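Alice's first three facts, and last year's fact about 23, are all easy to confirm by brute force; a quick sketch (the helper functions are mine, not from any source):

```python
from math import isqrt

# Fact 1: every m <= sqrt(24) (so m = 1,2,3,4) divides 24.
assert all(24 % m == 0 for m in range(1, isqrt(24) + 1))

# Fact 2: 24 is the least number with exactly 8 divisors.
def num_divisors(k):
    return sum(1 for m in range(1, k + 1) if k % m == 0)

assert num_divisors(24) == 8
assert all(num_divisors(k) != 8 for k in range(1, 24))

# Fact 3: 1^2 + 2^2 + ... + 24^2 = 4900 = 70^2.
assert sum(k * k for k in range(1, 25)) == 70 ** 2

# Last year's fact: 23 really does need 9 positive cubes
# (2*2^3 + 7*1^3).
def min_cubes(n):
    # dp[k] = fewest positive cubes summing to k
    dp = [0] + [float("inf")] * n
    for k in range(1, n + 1):
        c = 1
        while c ** 3 <= k:
            dp[k] = min(dp[k], dp[k - c ** 3] + 1)
            c += 1
    return dp[n]

print(min_cubes(23))  # 9
```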
I am sure that for all numbers ≤ 94 one can come up with some facts of interest. The book Those Fascinating Numbers has an interesting fact about many numbers. The least number it has no interesting fact about is 95, but I suspect Alice and Bob won't be married that long.
What are the coolest numbers (mathematically)? See here for a possible answer.
Thursday, August 20, 2015
Crowdsourcing the Truth
A new project Augur aims to create a decentralized prediction market. If this post so moves you, Augur is in the midst of a reputation sale. Don't miss out if you would like to be an Augur reporter.
A prediction market takes a future event, such as Hillary Clinton winning the 2016 Presidential Election, and creates a security that pays off $100 if Hillary wins and $0 otherwise. The market allows buying, selling and short selling the security, and the price of the security represents the probability the event will happen. Predictwise, which aggregates prediction markets, has the probability of Hillary winning at 47% as I write this. But there are a limited number of markets out there for Predictwise to draw from.
Intrade, which shut down due to financial improprieties in 2013, used to run markets on all aspects of elections and other current events. Many other prediction markets have disappeared over time. The Augur team put out a white paper describing their fully decentralized prediction market immune to individuals bringing it down. They build on cryptocurrencies for buying and selling and a group of "reporters" financially incentivized to announce the correct answer for each market.
It's that last part I find the most interesting, instead of having an authority that reports the results, Augur will crowdsource the truth.
A key feature of Augur is tradeable Reputation. The total amount of Reputation is a fixed quantity, determined upon the launch of Augur. Holding Reputation entitles its owner to report on the outcomes of events, after the events occur... Reputation tokens are gained and lost depending on how reliably their owner votes with the consensus.

Consensus may not be "the truth". Reporters are not incentivized to report the truth but what the other reporters will report as the consensus. Those who buy and sell on the market are not betting on the truth but the outcome as it is decided by the consensus of reporters. We have an ungrounded market.
The purchasers of reputation tokens likely won't represent the public at large and biases may come to play. Would this consensus have agreed that Bush won the 2000 election?
What would have the consensus done on the North Korea controversy?
[The Intrade security] allowed speculation on whether North Korea would, by 31 July 2006, successfully fire ballistic missiles that would land outside its airspace. On 5 July 2006 the North Korean government claimed a successful test launch that would have satisfied the prediction, a launch widely reported by world media. [Intrade] declared that the contract's conditions had not been met, because the US Department of Defense had not confirmed the action, and this confirmation was specifically required by the contract. (Other government sources had confirmed the claim, but these were not the sources referenced in the contract.) Traders considered this to be in strict compliance with the stated rule but contrary to the intention of the market (which was to predict the launch event, and not whether the US Defense Department would confirm it).

Since Augur will allow arbitrarily worded contracts on topics such as proofs of P v NP, the consensus reporting may lead to some interesting future blog posts.
Despite my reservations, I'm quite excited to see Augur set up a prediction market system that can't be shut down, and I wish them all the luck. You have until October 1 to buy reputation if you want to be deciding the truth instead of just predicting it.
Sunday, August 16, 2015
Have we made Progress on P vs NP?
While teaching P vs NP in my class Elementary Theory of Computation (Finite Automata, CFGs, P-NP, Decidability-Undecidability) I was asked: What progress has been made on P vs NP?
I have heard respectable theorists answer this question in several ways:
1) There has been no progress whatsoever, but the problem is only 40 years old, a drop in the mathematical bucket sort.
2) There has been no progress whatsoever, and this is terrible, since 40 years of 20th and 21st century mathematics is a lot and we already had so much to draw on. We are in it for the long haul.
3) We have made progress on showing some techniques will not suffice, which is progress--- of a sort.
4) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. Too bad--- since P NE NP we've made no progress.
5) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. We should not be closed-minded to the possibility that P=NP. (NOTE: other theorists say YES WE SHOULD BE CLOSED-MINDED.)
6) Note that:
a) We have pathetic lower bounds on real models of computation.
b) We have meaningful lower bounds on pathetic models of computation.
c) We DO NOT have meaningful lower bounds on real models of computation.
7) Have we made progress? Once the problem is solved we'll be able to look back and say what was progress.
I told the students my view: We have made progress on showing some techniques will not suffice, which is progress--- of a sort. My class found this funny. Then again, students find the fact that there are sets that are undecidable, and sets even harder than that, funny too.