Monday, January 29, 2018
(NOTE: this is NOT a "we hate Elsevier and the others" post, though I suspect the comments will be about that.)
Alexandra Elbakyan has created a repository of science papers for scientists to share. She did this because too many papers were behind paywalls. An article about her (and the inspiration for this post) is here. Note that she has had some legal problems.
But it got me thinking: I have rarely had a problem getting an article. Between authors' websites and arXiv, most of what I want is out there. Lance and others I've asked agree: while there are problems with editors, publishers, etc., access is NOT one of them.
Why don't other scientists just routinely do this?
Some speculation follows, but if someone actually knows, please comment:
1) Some scientists purposely don't post online to avoid being scooped, or out of fear of being scooped.
2) Some scientists don't post for fear of legal issues with their publishers. I do not know of a single computer scientist or mathematician who has gotten in any kind of trouble for posting a paper on their website or arXiv.
3) Some scientists are lazy.
4) (This contrasts with Comp Sci.) Other fields do things the way they do since they ALWAYS did it that way. Comp Sci is a younger field and hence is less tied to tradition.
5) Other scientists ARE posting their papers (gee, then why did Alexandra have to create the repository?).
6) Alexandra's repository is where arXiv was X years ago. They'll catch up. But they are having legal problems. Did arXiv ever have legal problems?
Looking over these I'm not sure I believe any of them. Any other ideas?
Again, the question is: Why don't other fields routinely post to something like arXiv or their own websites?
Friday, January 26, 2018
From Art to Science
Q: Why did the neural net cross the road?
A: Who cares, as long as it got to the other side.
Whit Diffie and Martin Hellman in their classic 1976 paper talk about the transition of cryptography:
Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.
Indeed we have some very strong theoretical foundations for cryptography, though they still rely on unproven hardness assumptions. Often we trade off formally proven techniques for efficiency, for example in AES and Bitcoin. Here we use "science" in another sense: extensive testing of the hardness of these protocols. Creating and breaking cryptographic protocols used to be an art, requiring the right intuitions to make things work (for now). Today, breaking security rarely involves breaking a cryptographic protocol; rather, it comes down to bad implementations and easy-to-guess or phished passwords.
For neural networks we still live in an art regime. We have great methods to minimize the error for a given network (though no formal proofs as such), but one still needs to design the right network with the appropriate topology and use various tricks, like regularization, to prevent overfitting. Several members of the theoretical CS community have taken on the task of turning the art into a science, but getting to where we sit in cryptography will take some time. Cryptography took thousands of years to get from an art to a science; hopefully we can get there faster for machine learning.
This debate came to a head at the recent NIPS conference. Ali Rahimi in his test-of-time talk described the current state of ML research as "alchemy". Yann LeCun gave a strong response:
Sticking to a set of methods just because you can do theory about it, while ignoring a set of methods that empirically work better just because you don't (yet) understand them theoretically is akin to looking for your lost car keys under the street light knowing you lost them someplace else.
Imagine if we never did any cryptography until we had the science in place.
Proving theorems in mathematics is an art more than a science. Because of incompleteness, in some sense it has to be. If mathematics ever became automated it would be a whole lot less fun.
Wednesday, January 24, 2018
How many degrees are in a Martian Year?
James Tanton gave a great talk at the JMM (Joint Mathematics Meetings) in San Diego titled
How many degrees are in a Martian year?
but he didn't quite answer his title question. Here is an excerpt of the talk on YouTube: here. It doesn't quite answer the question either, though he could have.
I feel the urge to explain and complete the talk.
Why do we earthlings have 360 degrees in a circle? Because there are 365.25 days in a year and nobody wants to work with that number, so they round off. They could have chosen 360 or 365 or 370, but since 360 has the most divisors, it wins. (There is a different measure, gradians, of which there are 400 in a circle, also a nice round number, but 360 is rounder by the criterion we will soon explain.)
There are 668 Martian days in a Martian year. So you might think they would pick 670 or 660 for their degrees. But that's base-10 thinking! The answer depends on what base the Martians use. Let's assume that, like us, it depends on the number of fingers they have on their hands. We also assume they have two hands with the same number of fingers on each (would aliens look like us or not? Here are some thoughts: here and here). Two fingers on a hand seems like too few, and ten seems like too many, so let's assume their base is one of the following:
3 fingers per hand: base 6. 668 in base 10 is 3032 in base 6. 3020 (base 6) = 660 is the answer: divisible by 2, 3, and 5.
4 fingers per hand: base 8. 668 in base 10 is 1234 in base 8. 1240 (base 8) = 672 is the answer: divisible by 2, 3, 4, and 8.
5 fingers per hand: base 10. 668 stays as is. Let's go with 660: divisible by 2, 3, and 5.
6 fingers per hand: base 12. 668 in base 10 is 478 in base 12. 470 (base 12) = 660 is the answer: divisible by 2, 3, 4, and 5.
7 fingers per hand: base 14. 668 in base 10 is 35A in base 14. 360 (base 14) = 672 is the answer; I leave its divisors to you.
Any other good candidates or methods for this?
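If you want to experiment, here is a small Python sketch (mine, not from the talk) that converts 668 to each candidate base and scores the nearby "round" numbers (multiples of the base, so they end in 0 in that base) by how many divisors they have:

# Sketch: for each candidate Martian base, write 668 in that base and
# pick the nearby multiple of the base with the most divisors.

def to_base(n, b):
    """Digits of n in base b, as a string (enough digit symbols for b <= 14)."""
    symbols = "0123456789ABCD"
    out = ""
    while n > 0:
        out = symbols[n % b] + out
        n //= b
    return out or "0"

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

SOLS = 668  # Martian days (sols) in a Martian year, rounded

for base in (6, 8, 10, 12, 14):
    # candidates that "end in 0" in this base: multiples of the base near 668
    cands = [m * base for m in range(SOLS // base - 2, SOLS // base + 3)]
    best = max(cands, key=num_divisors)
    print(f"base {base}: 668 = {to_base(SOLS, base)}, "
          f"a round choice is {to_base(best, base)} "
          f"({best} in base 10, {num_divisors(best)} divisors)")

Ties are broken arbitrarily here; a Martian standards committee might weigh small divisors like 2 and 3 more heavily than the raw divisor count.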
Thursday, January 18, 2018
A Sensitive Game
Last month I posted about the sensitivity conjecture and today I would like to talk about an interesting game developed by Gilmer, Koucký and Saks and independently by Drucker that could yield a proof.
Consider the following game on n bits among three players: Alice, Bob and Carol. Carol picks a bit position and Alice sets that bit to 0 or 1. Carol then picks another unused bit position and Alice sets that bit as well. This repeats until the last bit, which Carol gets to set herself. The bits are then revealed to Bob, who has to give a set of bit positions that includes the bit Carol set at the end. Alice and Bob can work out a strategy ahead of time with the goal of minimizing the size of the set.
If Carol can force Bob to give a set of n^ε bits for some ε > 0, this would prove the sensitivity conjecture! Basically a family of functions that give a counterexample to the sensitivity conjecture would give a strategy for Alice and Bob that yields a set of size n^{o(1)}. You can find the full proof in the papers above.
How well can Alice and Bob do? They can use the following strategy to achieve n^{1/2}: Break up the input positions into n^{1/2} groups of n^{1/2} bits each. Whenever Carol picks a bit position, Alice sets that bit to zero unless it is the last bit in that group, in which case she sets it to one. Carol can set the last bit in some group any way she wants. Bob will either see every group having one 1, and will give the set of the positions of all the 1's, or Bob will see a group of all zeros and give the positions in that group.
Mario Szegedy uses error-correcting codes to improve the upper bound to O(n^{0.4732}). A Georgia Tech undergraduate, DeVon Ingram, improves the upper bound to O(n^{0.4696}) by generalizing Szegedy's techniques. Ingram's proof comes down to finding a one-to-one function mapping subsets of {1,...,15} of size 4 to perfect codes of length 15, with the property that the bits of the code are 1 for the indices of the subset. Perhaps one could find a clever mapping that has this property by hand, but Ingram wrote code using a matching algorithm. A true computer-assisted proof. Longer code lengths, alas, won't yield direct improvements.
The connection to sensitivity doesn't go both ways. An upper bound of n^{o(1)} wouldn't necessarily yield a counterexample to the sensitivity conjecture, though it would tell us the limitation of this game approach. Nevertheless I find it cool that a game that doesn't talk about functions at all could help settle an important question about them.
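Here is a minimal Python simulation of the n^{1/2} strategy above (my own sketch; Carol picks positions at random rather than adversarially, which is enough to sanity-check that Bob's set always contains Carol's bit and has size about the square root of n):

import math
import random

def play_once(n):
    """One round of the game with Alice/Bob playing the sqrt(n) strategy.
    Returns the size of the set Bob announces."""
    g = math.isqrt(n)  # group size, roughly sqrt(n)
    groups = [list(range(s, min(s + g, n))) for s in range(0, n, g)]
    group_of = {p: grp for grp in groups for p in grp}

    bits = [None] * n
    order = random.sample(range(n), n)  # Carol's order of bit positions
    for pos in order[:-1]:
        unset = [p for p in group_of[pos] if bits[p] is None]
        # Alice: set to 1 only if this is the last unset bit of its group
        bits[pos] = 1 if unset == [pos] else 0
    last = order[-1]
    bits[last] = random.randint(0, 1)  # Carol sets the final bit herself

    # Bob: a group of all zeros must contain Carol's bit; otherwise every
    # group has exactly one 1 and Carol's bit is among the 1's.
    zero_groups = [grp for grp in groups if all(bits[p] == 0 for p in grp)]
    answer = zero_groups[0] if zero_groups else [p for p in range(n) if bits[p] == 1]
    assert last in answer  # Bob's set always catches Carol's bit
    return len(answer)

n = 10_000
print(max(play_once(n) for _ in range(100)), "vs sqrt(n) =", math.isqrt(n))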
Tuesday, January 16, 2018
Donald Knuth Turns 80 years and 6 days
Celebrating Donald Knuth's 80th birthday, or his 80-years-and-6-days birthday, seems odd. Should we use powers of 2? Hmm, too few: just 32 and 64, really. And having a 32-year celebration for someone is unusual. How about numbers that end in 0 in base 8? 64 would be 100, 72 would be 110, 80 would be 120, so AH, we would be celebrating! So let's celebrate!
LANCE: One of us should blog about Don Knuth turning 80.
BILL: How about both of us, sep posts?
Lance had his post here. Now its my turn.
Donald Knuth was the first person (roughly; history is not as definite as mathematics) to use mathematics to analyze algorithms. This may seem like an obvious thing to do, but that's easy to say in retrospect. And he never lost focus: he may have had to use some hard math to analyze algorithms, but his goal always was to analyze algorithms. He didn't get fascinated with some obscure aspect of math like, say, surreal numbers, and go write a book on it. Oh wait, he did (see here). But, for the most part, everything he did was towards faster algorithms and faster typesetting.
In an early draft of a book review of Companion to the Papers of Donald Knuth I wrote, at the end of it,
Donald Knuth has been called the father of Theoretical Computer Science and hence his thoughts are worth knowing
I usually email my reviews to the author so they can make sure I didn't say anything stupid or, in some very rare cases, have a rebuttal. In Knuth's case I postal-mailed it (ask your grandparents what that is) since he does not use email. He mailed back the hardcopy with some corrections in pencil. One of them was to cross out
father of theoretical computer science
and replace it with
father of algorithmic analysis.
I made the correction. That is how he views himself and it would be odd to argue the point.
As a tribute to him I have gathered up all of the book reviews in my column of books by him (between 10 and 12, depending on how you count) and of books clearly inspired by him (one book: A=B) into one file, which I point to here.
Wednesday, January 10, 2018
Donald Knuth Turns Eighty
We've kept this blog going long enough that we start repeating our celebrations. Ten years ago Bill celebrated Don Knuth's 70th Birthday and today Donald Knuth turns 80. While he celebrates in Piteå, Sweden with its less than 4.5 hours of daylight, we wish him a happy birthday from stateside.
Donald Knuth is known for many things: important algorithms, the brilliant book series The Art of Computer Programming, TeX, the many awards he's won and the award named after him. In my favorite story, back in the 70's when Knuth started working on Volume 4 (still in progress), he wanted a good name for those hard problems that Cook and Karp found. His candidates "Herculean", "Formidable" and "Arduous" didn't get much traction, so he ran a contest, which had some interesting suggestions, before he reluctantly went with the name "NP-complete" that came out of a group at MIT.
In his SIGACT News article that described the contest, Donald Knuth was so sure that these hard problems would remain hard that he put everything on the line, offering anyone who could prove P = NP a live turkey. May Don Knuth and his turkey continue to celebrate many more birthdays.
Looking back in this blog, in 2015 I wrote about the history of the history of computing including this wonderful talk by Knuth, Let's Not Dumb Down the History of Computer Science.
Sunday, January 07, 2018
A new largest prime found!
A new largest KNOWN prime has been discovered, and it's 23 million digits long.
Nate Silver's website had an article about it (written by Oliver Roeder) here
An article about why people do this is here
Lance posted about finding large primes in 2006 here
I'll just make some random comments:
1) The prime is 2^{77,232,917} - 1.
2) The prime is not much bigger than the previous champion.
3) More generally, the graph (in Oliver Roeder's article) shows that from 1600 to about 1951 there was slow progress, but since then there has been LOTS of progress. See the table in this article. I had wanted to say a new prime was found every year but, alas, it's not that simple a pattern. Even so, lots of new records.
4) I"ll list the reasons given for why people do this and my comments on them.
a) Tradition! (Note to self- write a novelty song to the tune of Fiddler-on-the-roof's Tradition about why people work on various mathematical things)
b) For the by-product of the quest. This one does make sense and I am sure has driven and helped test out some research. Reminds me of the spinoffs from going to the moon (see here). Contrary to what I heard as a kid, the orange-powder-drink Tang was not a spinoff. But there were many others. Of course- would these spinoffs have happened anyway? Hard to know.
c) People collect rare and beautiful items. I don't really see this one. Finding a large prime doesn't make it yours; it belongs to the ages! And I don't think people get their names on the prime either. The only prime that has someone's name on it is the famous Grothendieck prime, which is 57 (and is, of course, not actually prime). Oh well. There are sets of primes with people's names on them: Mersenne primes, Gaussian primes (which are a subset of the Gaussian integers so maybe shouldn't count), Eisenstein primes, and Sophie Germain primes. If you know of any other primes or sets of primes named after someone, leave a comment please.
d) For the glory! Maybe, but given how briefly people hold the record, fame is fleeting.
e) To test the hardware. This one I didn't know! I'll quote the article as to why primes are good for this:
Why are prime programs used this way? They are intensely CPU and bus bound. They are relatively short, give an easily checked answer (when run on a known prime they should output true after their billions of calculations). They can easily be run in the background while other "more important" tasks run, and they are usually easy to stop and restart.
f) To learn more about their distribution. The prime number theorem was conjectured from data. We have so many primes now that I wonder if a few more really help formulate conjectures.
g) For the money. The first person to get a ten-million-digit prime gets $100,000. The first person to get a one-billion-digit prime gets $250,000. Wow! Except that the article must be a bit old since the $100,000 prize was claimed in 2009 (see here). Still, there's that one-billion-digit prize out there!
5) Mersenne primes are of the form 2^n - 1. It is known that n must be prime for 2^n - 1 to be prime (this is not hard). There are much faster primality-testing algorithms for Mersenne primes than for arbitrary primes (see the sketch of the Lucas-Lehmer test after this list). But see the next item.
6) While writing this blog post I looked up non-Mersenne primes. It seems like the largest one is 10223*2^31172165 + 1, discovered in 2016. But of more interest: there is no Wikipedia page on non-Mersenne primes, just some outdated pages that don't have the most recent information. As the kids say, it's not a thing.
8) I'll add one more reason why people work on this, but it's more of a tautology: people work on finding large primes because they can! By contrast, finding VDW (van der Waerden) numbers is hard and unlikely to see much progress.
9) I think that the most recent advances have come from computing power and not number theory. (If this is incorrect, let me know with a polite comment.)
10) Have there been spinoffs in either number theory OR computing power lately?
11) I wonder if there will come a point where progress gets hard again and the graph of largest known primes flattens out. I tend to think yes, but hard to say when.
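To make items 1 and 5 concrete, here is a short Python sketch of the Lucas-Lehmer test, the Mersenne-specific test that the GIMPS search is built on. It decides whether 2^p - 1 is prime using only p - 2 squarings mod 2^p - 1; nothing remotely this fast is known for arbitrary numbers of that size. (The exponent and digit count below are from the articles; the code is an illustration, not the optimized GIMPS implementation.)

import math

def lucas_lehmer(p):
    """True iff 2^p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # one squaring mod 2^p - 1 per step
    return s == 0

# Small Mersenne prime exponents: 3, 5, 7, 13, 17, 19, 31, ...
print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])

# Sanity check on "23 million digits": the number of digits of
# 2^77,232,917 - 1 is floor(77232917 * log10(2)) + 1.
print(math.floor(77_232_917 * math.log10(2)) + 1)  # 23249425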
Friday, January 05, 2018
Which of these Math acronyms are well known?
The last time I taught grad Ramsey theory there were very good math grads and ugrads in it. They used some acronyms: some I knew, some I didn't know (but know now). I am sure some are well known and some are not; I don't know which is which. Here is the list, with comments.
WLOG- Without Loss of Generality. This one I know, and it seems well known: when googled, the first page is all this definition. (Maybe I shouldn't use the term ``googled''; I've heard that brands don't like it when their names become generic terms. Kleenex, Photoshop, and Xerox had this fate. There is a word for it: genericide.)
ITOT- It turns out that. An Applied Math Prof used this in a course I had in 1977. I have not seen it used since then. I don't even use it.
BWOC- By Way of Contradiction. I thought this was better known. When I googled it I got to this page, which tells me it stands for:
Big Wolf on Campus (TV Show)
By Weight of Cement (Oil and Gas industry)
Big Women on Campus (GOOD- the counterpart to BMOC)
By Way of Contradiction (Given the other items on the list I'm surprised it made the cut)
Bob Wayne's Oil company
FTSOC- For the Sake of Contradiction. NOT, as Google told me, Fuck This Shit O'Clock (reminds me of when I use FML for Formula and the class tells me it means Fuck My Life).
WTS- This was a new one for me. It took a while to get it from context, but it was Want To Show. Google gives Women in Transportation.
NTS- Need to show. Also a radio station and the National Traffic System.
I think the only one of these that is standard is WLOG. The rest, I think, could be useful. But I ask you: are any of these standard? Are there ones that are standard that I missed? Of course, the great thing about standards is that there are so many of them.