## Tuesday, July 16, 2019

### Guest post by Samir Khuller on attending The TCS Women 2019 meeting

(I will post the solution to the problem in the last blog later in the week---probably Thursday. Meanwhile, enjoy these thoughts from Samir Khuller on the TCS Women 2019 meeting.)

Guest Post by Samir Khuller:

"Am I even allowed here?" was the first thought that crossed my mind when I entered the room. It was packed with women (over 95%), though a few minutes later several men had trickled in. I was at the TCS Women spotlight workshop on the day before STOC. Kudos to Barna Saha, Sofya Raskhodnikova, and Virginia Vassilevska Williams for putting this grand (and long-needed) event together, which serves as a role model and showcases some of the recent work by rising stars. In addition to the Sunday afternoon workshop, the event was followed by both an all-women panel and a poster session (which I sadly did not attend).

The rising stars talks were given by Naama Ben-David (CMU), Andrea Lincoln (MIT), Debarati Das (Charles University) and Oxana Poburinnaya (Boston U). After a short break the inspirational talk was given by Ronitt Rubinfeld from MIT. Ronitt's talk was on the topic of Program Checking, but she made it inspirational by putting us in her shoes as a young graduate student, three decades back, trying to make a dent in research by working on something that her advisor Manuel Blum and his senior graduate student Sampath Kannan had been working on, and I must say she made a pretty big dent in the process! She also related those ideas, in a really elegant manner, to other pieces of work done since, and showed how these pieces of work led to work on property testing.

I am delighted to say that NSF supported the workshop along with companies such as Amazon, Akamai, Google and Microsoft. SIGACT plans to be a major sponsor next year.

The full program for the workshop is here.

## Sunday, July 14, 2019

### Two infinite hat problems and a question about what is "well known"

This is a joint post with David Marcus. You will see how he is involved in my next post.

Two infinite hat problems based on one scenario. I am also curious if they are well known.

1) There are an infinite number of people, numbered 1,2,3,... There are 2 colors of hats. They can all see everyone's hat but their own.

2) The adversary is going to put hats on all the people. They will all guess their own hat color *at the same time*.

3) The people can discuss strategy ahead of time, but must use a deterministic strategy and the adversary knows the strategy.

4) The people want to minimize how many they get wrong.

5) The adversary puts on hats to maximize how many they get wrong.

I ask two questions and one meta-question:

Q1: Is there a solution where they get all but a finite number of the guesses right? (I have blogged about a variant of this one a while back.)

Q2: Is there a solution where they get all but at most (say) 18 wrong? (My students would say *the answer has to be YES or he wouldn't ask it*. They don't realize that I work on upper AND lower bounds!)

Q3: How well known is problem Q1 and the solution? Q2 and the solution? I've seen Q1 and its solution around (not sure where), but the only source on Q2 that I know of is CAN'T TELL YOU IN THIS POST, WILL IN THE NEXT POST. So, please leave a comment telling me if you have seen Q1 or Q2 and solutions. And if so then where.

Feel free to leave any comments you want; however, I warn readers who want to solve it themselves to not look at the comments, or at my next post.

## Thursday, July 11, 2019

### Degree and Sensitivity

Hao Huang's proof of the sensitivity conjecture that I posted on last week relied on a 1992 result of Gotsman and Linial. Let's talk about that result.

Consider the set S = {-1,1}^n. The hypercube of dimension n is the graph with vertex set S and an edge between x = (x_1,…,x_n) and y = (y_1,…,y_n) in S if there is exactly one i such that x_i ≠ y_i. Every vertex has degree n.

We say a vertex x is odd if x has an odd number of -1 coordinates, and even otherwise. Every edge joins an odd and an even vertex.

Let f be a function mapping S to {-1,1}. The sensitivity of f on x is the number of i such that f(x_1,…,x_i,…,x_n) ≠ f(x_1,…,-x_i,…,x_n). The sensitivity of f is the maximum over all x in S of the sensitivity of f on x.

Let g be the same function as f except that we flip the value on all odd vertices. Notice now that the sensitivity of f on x is the number of i such that g(x_1,…,x_i,…,x_n) = g(x_1,…,-x_i,…,x_n).

Let G be the induced subgraph on the set of vertices x such that g(x) = -1 and H be the induced subgraph on the set of x such that g(x) = 1. The sensitivity of f is the maximum number of neighbors of any vertex in G or H.

Consider f as a multilinear polynomial over the reals. The sensitivity conjecture states there is some α > 0 such that if f has degree n then f has sensitivity at least n^α.

Note g(x_1,…,x_n) = f(x_1,…,x_n)x_1⋯x_n. If f has a degree n term, the variables in that term cancel out on S (since x_i^2 = 1) and the coefficient of the degree n term of f becomes the constant term of g. The constant term is just the expected value, so f has full degree iff g is unbalanced.

GL Assumption: Suppose you have a partition of the hypercube into sets A and B with |A| ≠ |B|, and let G and H be the induced subgraphs on A and B. Then there is some constant α > 0 such that there is a node of A or B with at least n^α neighbors.

The above argument, due to Gotsman and Linial, shows that the GL assumption is equivalent to the sensitivity conjecture.

Huang proved that given any subset A of the vertices of a hypercube with |A| > 2^n/2, the induced subgraph has a node of degree at least n^{1/2}. Since either A or B in the GL assumption has size greater than 2^n/2, Huang's result gives the sensitivity conjecture.
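The f-versus-g correspondence is easy to sanity-check by brute force on small n. Here is a quick Python sketch (my own, with a hypothetical helper name, not from the post) that, for random f on the n-cube, verifies that the sensitivity of f at x counts exactly the coordinates where g is *unchanged* by a flip, and that the degree-n coefficient of f is the constant term of g:

```python
import itertools
import math
import random

# Brute-force sanity check (my own sketch) of the Gotsman--Linial
# correspondence described above, on the n-dimensional hypercube {-1,1}^n.

def check_correspondence(n, trials=50):
    cube = list(itertools.product([-1, 1], repeat=n))
    for _ in range(trials):
        f = {x: random.choice([-1, 1]) for x in cube}
        # g flips f exactly on the odd vertices (odd number of -1 coords),
        # which is the same as g(x) = f(x) * x_1 * ... * x_n.
        g = {x: f[x] * math.prod(x) for x in cube}

        for x in cube:
            nbrs = [x[:i] + (-x[i],) + x[i + 1:] for i in range(n)]
            sens_f = sum(f[x] != f[y] for y in nbrs)   # sensitivity of f at x
            agree_g = sum(g[x] == g[y] for y in nbrs)  # flips leaving g fixed
            assert sens_f == agree_g

        # The degree-n coefficient of f (up to normalization) is
        # sum_x f(x) x_1...x_n, which is exactly the constant term of g.
        assert sum(f[x] * math.prod(x) for x in cube) == sum(g.values())
    return True

print(check_correspondence(3))  # True
```

The two asserts never fire, matching the argument above: neighbors have opposite parity, so f disagrees across an edge exactly when g agrees.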

## Sunday, July 07, 2019

### Fortran is underrated!

(Joint Post with David Marcus who was a classmate of mine at SUNY Stony Brook [now called Stony Brook University]. I was class of 1980, he was class of 1979. We were both math majors.)

David has been reading Problems with a POINT (I'm glad someone is reading it) and emailed me a comment on the following passage, which was essentially this post. I paraphrase what I wrote:

PASSAGE IN BOOK:

I dusted off my bookshelves and found a book on Fortran. On the back it said:

FORTRAN is one of the oldest high-level languages and remains the premier language for writing code for science and engineering applications. (NOTE: The back of the book uses Fortran, but the spell checker I am using insists on FORTRAN. As a fan of capital letters, I don't mind going along.)

When was the book written?

The answer was surprising in that it was 2012 (the chapter title was *Trick Question or Stupid Question*. This was a Trick Question.) I would have thought that FORTRAN was no longer the premier language by then. I also need to dust my bookshelves more often.

END OF PASSAGE IN BOOK

David Marcus emailed me the following:

DAVID'S EMAIL:

Page 201. Fortran. One clue is that it said "Fortran" rather than "FORTRAN". Fortran 90 changed the name from all upper case. Whether it is the "premier language" depends on what you mean by "premier". It is probably the best language for scientific computing. I used it pretty much exclusively (by choice) in my previous job, which I left in 2006. The handling of arrays is better than in any other language I've used. Maybe there are some better languages that I'm not familiar with, but the huge number of high-quality scientific libraries available for Fortran makes it hard to beat. On the other hand, I never wrote a GUI app with it (Delphi is best for that).

END OF DAVID'S EMAIL

In later emails we agreed that Fortran is not used that much (there are lists of most-used languages, and neither Fortran nor FORTRAN is ever in the top 10). But what intrigued me was the following contrast:

1) David says that it's the BEST language for scientific computing. I will assume he is right.

2) I doubt much NEW code is being written in it. I will assume I am right.

So---what's up with that? Some options:

OPTION 1) People SHOULD use Fortran but DON'T. If so, why is that? Fortran is not taught in schools. People are used to what they already know. Perhaps people who pick up new things easily and want to use new things would rather use NEW things than NEW-TO-THEM-BUT-NOT-TO-THEIR-GRANDMOTHER things. Could be a coolness factor. Do the advantages of Fortran outweigh the disadvantages? Is what they are using good enough?

OPTION 2) The amount of scientific computing software being written is small since we already have these great Fortran packages. So Fortran may be a victim of its own success.

CAVEAT: When I emailed David a first draft of the post, he pointed out the following, which has to do with the lists of most-used programming languages:

DAVID'S EMAIL:

The problem with the lists you were looking at is that most people in the world are not scientists, so most software being written is not for scientists. Scientists and technical people are writing lots of new code. If you look at a list of scientific languages, you will see Fortran, e.g., here and here.

There are several Fortran compilers available. One of the best was bought by Intel some time back and they still sell it. I doubt they would do that if no one was using it. Actually, I think Intel had a compiler, but bought the Compaq compiler (which used to be the Digital Equipment compiler) and merged the Compaq team with their team. Something like that. I was using the Compaq compiler around that time.

END OF DAVID'S EMAIL

One quote from the second pointer I find intriguing. (Second use of the word *intriguing*. It was my word-of-the-day on my word calendar.)

*... facilities for inter-operation with C were added to Fortran 2003 and enhanced by ISO/IEC technical specification 29113, which will be incorporated into Fortran 2018.*

I (Bill) don't know what some of that means; however, it does mean that Fortran is still active.

One fear: with Fortran not being taught that much, will knowledge of it die out? We will be like Star Trek aliens:

*The old ones built these machines, but then died and we can't fix them!*

## Tuesday, July 02, 2019

### Local Kid Makes History

Huang, an assistant professor in the math department at Emory, settled an open question about the hypercube. The hypercube is a graph on N = 2^n vertices where each vertex corresponds to an n-bit string and there are edges between vertices corresponding to strings that differ in a single bit. Think of the set of the strings of odd parity: N/2 vertices with no edges between them. Add any other vertex and it would have n neighbors. Huang showed that no matter how you place those N/2 + 1 vertices in the hypercube, some vertex will have at least n^{1/2} neighbors. By an old result of Gotsman and Linial, Huang's theorem implies the sensitivity conjecture.
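For tiny n you can confirm Huang's combinatorial statement by exhaustive search. Here is a short Python sketch (mine, with hypothetical helper names, not from the paper) that checks every set of N/2 + 1 vertices of the n-cube:

```python
import itertools

# Exhaustive check, for tiny n, of Huang's theorem as stated above:
# every set of 2^(n-1)+1 vertices of the n-cube induces a subgraph with a
# vertex of degree at least sqrt(n). Vertices are n-bit integers; two
# vertices are adjacent iff they differ in exactly one bit.

def max_induced_degree(n, subset):
    s = set(subset)
    return max(sum((v ^ (1 << i)) in s for i in range(n)) for v in s)

def huang_holds(n):
    size = 2 ** (n - 1) + 1
    return all(max_induced_degree(n, sub) >= n ** 0.5
               for sub in itertools.combinations(range(2 ** n), size))

print(huang_holds(3))  # True: any 5 of the 8 cube vertices force degree >= 2
```

Exhaustive search obviously stops being feasible almost immediately; the point of Huang's proof is that the bound holds for all n.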

I won't go through the shockingly simple proof; the paper is well written, or you can read the blogs I linked to above, or even just Ryan O'Donnell's tweet.

I have nothing more to say than wow, just wow.

## Sunday, June 30, 2019

### A proof that 22/7 - pi > 0 and more

My father was a High School English teacher who did not know much math. As I was going off to college, intending to major in math, he gave me the following sage advice:

1) *Take Physics as well as Math since Physics and Math go well together.* This was good advice. I took the first year of Physics for Physics majors, and I later took a senior course in Mechanics since that was my favorite part of the first-year course. Kudos to Dad!

2) π *is exactly* 22/7. I knew this was not true, but I also knew that I had no easy way to show him this. In fact, I wonder if I could have proven it myself back then.

I had not thought about this in many years when I came across the following:

Problem A-1 on the 1968 Putnam exam:

Prove 22/7 - π = ∫_0^1 x^4(1-x)^4/(1+x^2) dx

(I can easily do this by partial fractions and remembering that ∫ 1/(1+x^2) dx = tan^{-1} x, which is tan inverse, not 1/tan. See here.)
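If you just want to believe the identity before doing the partial fractions, a few lines of Python (my own check, not part of the Putnam solution) confirm it numerically with the midpoint rule:

```python
import math

# Numerical check (mine) of the Putnam identity above: the integral of
# x^4 (1-x)^4 / (1 + x^2) over [0,1] should equal 22/7 - pi.

def integrand(x):
    return x ** 4 * (1 - x) ** 4 / (1 + x ** 2)

N = 100_000  # midpoint rule with N panels
approx = sum(integrand((k + 0.5) / N) for k in range(N)) / N
print(abs(approx - (22 / 7 - math.pi)) < 1e-9)  # True
```

Since the integrand is positive on (0,1), the identity also immediately gives 22/7 - π > 0, which is the point of this post.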

(ADDED LATER---I have added conjectures on getting integrals of the form above except with 4 replaced by any natural number. Be the first on your block to solve my conjectures! It has to be easier than the Sensitivity Conjecture!)

Let n ∈ N, which we will choose later. By looking at the circle inscribed in a regular n-gon (n even), one finds that

n tan(π/n) > π

So we seek an even value of n such that

n tan(π/n) < 22/7

Using Wolfram Alpha, the smallest such n is 92.
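You don't need Wolfram Alpha for this; a two-line search (my own check) finds the same n:

```python
import math

# Confirming the claim above numerically (my own check): n*tan(pi/n)
# decreases toward pi as n grows, and the smallest EVEN n with
# n*tan(pi/n) < 22/7 is 92.

n = 2
while n * math.tan(math.pi / n) >= 22 / 7:
    n += 2
print(n)  # 92
```

So a circle inscribed in a regular 92-gon already separates π from 22/7.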

Would that convince Dad? Would he understand it? Probably not. Oh well.

Some misc points.

1) While working on this post I originally wanted to find tan(π/2^7) by using the half-angle formula many times, and get an exact answer in terms of radicals, rather than using Wolfram Alpha.

a) While I have lots of combinatorics books, theory of comp books, and more Ramsey Theory books than one person should own in my house, I didn't have a SINGLE book with any trig in it.

b) I easily found it on the web:

tan(x/2) = sqrt((1-cos x)/(1+cos x)) = sin x/(1+cos x) = (1-cos x)/sin x.

None of these seems like it would get me a nice expression for tan(π/2^7). But I don't know. Is there a nice expression for tan(π/2^k)? If you know of one then leave a polite comment.
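One route (my own sketch, with a hypothetical function name) is to halve the cosine instead of the tangent: iterating cos(x/2) = sqrt((1+cos x)/2) from x = π/2 gives cos(π/2^k) as a nested radical of 2's, and the tangent follows:

```python
import math

# A sketch (mine) of the repeated half-angle computation discussed above:
# applying cos(x/2) = sqrt((1 + cos x)/2) a total of k-1 times, starting
# from cos(pi/2) = 0, yields cos(pi/2^k); tan is then sin/cos. Requires
# k >= 2 so the final cosine is nonzero.

def tan_pi_over_2k(k):
    c = 0.0                          # cos(pi/2)
    for _ in range(k - 1):
        c = math.sqrt((1 + c) / 2)   # cosine of half the current angle
    return math.sqrt(1 - c * c) / c  # tan = sin/cos; angle lies in (0, pi/2)

print(abs(tan_pi_over_2k(7) - math.tan(math.pi / 128)) < 1e-12)  # True
```

Whether the resulting nested radical counts as a "nice expression" is exactly the question in the post.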

2) I assumed that there was a more clever and faster way to do the integral. I could not find old Putnam exams and their solutions on the web (I'm sure they are there someplace!---if you know, then comment politely with a pointer). So I got a book out of the library, *The William Lowell Putnam Mathematical Competition Problems and Solutions 1965--1984* by Alexanderson, Klosinski, and Larson. Here is the clever solution:

*The standard approach from Elementary Calculus applies.*

Not as clever as I was hoping for.

3) I also looked at the integral with 4 replaced by 1, 2, 3, 4, ..., 16. The results are in the writeup I pointed to before. It looks like I can use this sequence to get upper and lower bounds on π, ln(2), π+2ln(2), and π-2ln(2). I have not proven any of this. But take a look! And as noted above I have conjectures!

4) When I looked up INSCRIBING a circle in a regular n-gon, Google kept giving me CIRCUMSCRIBING. Why? I do not know, but I can speculate. Archimedes had a very nice way of using circumscribed circles to approximate π. It's on YouTube here. Hence people are used to using circumscribed rather than inscribed circles.

## Thursday, June 27, 2019

### FCRC 2019

Georgia Tech FCRC participants

Geoff Hinton and Yann LeCun gave their Turing Award lectures; their co-winner Yoshua Bengio was not in attendance. Hinton talked about how machine learning triumphed over symbolic AI. LeCun argued that under uncertainty, one should learn the distribution instead of just the average. If you want more, just watch it yourself.

To get to the STOC lectures you go up and down escalators and pass through ISCA (Computer Architecture) and PLDI (Programming Languages). It's like you are going up the computing stack until you reach algorithms and complexity.

The conference center was just two blocks from Chase Field so we could take in a Diamondbacks baseball game. They opened the roof because the temperature dropped into the double digits. Last night, Paul McCartney played at an arena just a block from the conference center, but instead I hung out at an Uber reception for the EC conference.

Let me mention a best paper awardee, The Reachability Problem for Petri Nets is Not Elementary by Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jerome Leroux and Filip Mazowiecki. In a Petri net you have a list of vectors of integers and an initial and final vector. You start with the initial vector and can add any of the other vectors nondeterministically as often as you like as long as no coordinate goes negative. Can you get to the final vector? This problem was known to be computable in "Ackermannian" time and EXPSPACE-hard. This paper shows the problem is not elementary, i.e. not solvable in running time a tower of 2's to the n. A recent result shows Petri Nets reachability is primitive recursive for fixed dimensions.
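The rule set is simple enough to try on toy instances. Below is a small Python sketch (mine; the helper name, bound, and example vectors are made up) that does a bounded breadth-first search over reachable markings. Of course this only handles tiny, artificially bounded instances; the unbounded general problem is the one the paper shows is not elementary:

```python
from collections import deque

# A toy illustration (mine, not from the paper) of the reachability rule
# described above: starting from `start`, repeatedly add any displacement
# vector from `moves`, as long as no coordinate goes negative. Can we
# reach `target`? We cap each coordinate at `max_coord` so the search is
# finite; the real problem has no such bound.

def reachable(start, target, moves, max_coord=10):
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        if v == target:
            return True
        for m in moves:
            w = tuple(a + b for a, b in zip(v, m))
            if all(0 <= c <= max_coord for c in w) and w not in seen:
                seen.add(w)
                queue.append(w)
    return False

print(reachable((1, 0), (0, 2), [(-1, 2), (1, -1)]))  # True: (1,0)+(-1,2)=(0,2)
```

The nondeterminism in the problem statement becomes the branching over `moves` in the search.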

Avi Wigderson gave the Knuth Prize lecture exploring deep connections between mathematics and algorithms. Hopefully the video will be online soon.

STOC next year in Chicago, EC as part of the Game Theory Congress in Budapest.


## Monday, June 24, 2019

### Are you smarter than a 5th grade amoeba?

(The title of this post is due to Henry Baker, who posted an article about this elsewhere.)

Amoeba finds approximate solution to TSP in linear time: here.

Over the years we have seen models of computation that claim to solve NPC or other hard problems quickly. I ask non-rhetorically and with an open mind how they have panned out.

In no particular order:

1) Parallelism. For solving problems faster YES. For speeding up how to solve NPC problems I think YES. For making P=NP somehow NO. Even so, parallel computers have been a definite practical success.

2) Quantum Computing. Will they factor large numbers anytime soon? Ever? Should we view the effort to build them as an awesome and insightful Physics experiment? Are there any problems that they are NOW doing faster? Is Quantum Crypto (I know, not the same thing) actually used? Will other things of interest come out of the study of quantum computing? It already has, see here.

3) DNA computing. Did that lead to practical solutions to NPC problems? I do not think it did. Did that lead to interesting math? Interesting biology? Interesting science? I do not know.

4) Autistic computing for finding primes: see here. Oliver Sacks, the neurologist, claimed that two autistic twin brothers could generate large primes quickly. This story was never put to a rigorous test and may not be quite right.

5) Amoeba computing: Too early to tell. The article seems to say it succeeded on 8 cities.

The problem with all of these non-standard models of computing is SCALE. And the more powerful classic computers get, the harder it is for these nonstandard models to compete.

Are these models interesting even if they don't end up getting us fast algorithms? They can be:

1) Do they lead to mathematics of interest? (Quantum- Yes, Parallelism- Yes)

2) Did they inspire algorithms for classical computers? (Quantum- Yes)

3) Do they give insight into other fields? (Quantum for Physics yes, DNA-computing for bio-??)

4) Have they ACTUALLY sped up computations in meaningful ways for problems we care about? (Parallelism has.)

If you know of any result which I missed

(e.g.,

Amoeba-computing giving insight into evolution,

Autistic computing being used by the NSA to find primes,

DNA computing leading to interesting mathematics)

then leave polite comments!


## Thursday, June 20, 2019

### The Urban/Rural Collegiality Divide

Just a reminder that Grigory Yaroslavtsev has taken over the Theory Jobs Spreadsheet. Check out who is moving where and let everyone know where you will continue your research career.

Recently I served on a review committee for a computer science department in a rural college town. You couldn't help but notice the great collegiality among the faculty. Someone told me their theory that you generally get more faculty collegiality in rural versus urban locations. Why?

In urban locations faculty tend to live further from campus, to get bigger homes and better schools, and face more traffic. They are likely to have more connections with people unconnected with the university. There are more consulting opportunities in large cities, a larger variety of coffee houses to hang out in and better connected airports make leaving town easier. Particularly in computer science, where you can do most of your research remotely, faculty will find themselves less likely to come in every day and you lose those constant informal connections with the rest of the faculty.

This is a recent phenomenon; even going back to when I was young faculty, you needed to come to the office for access to research papers, better computers to write papers, and good printers. Interaction with students and colleagues is always better in person, but in the past the methods of electronic communication proved more limiting.

The University of Chicago helped create and promote its own neighborhood and ran a very strong private school with reduced tuition for faculty children. Maybe my life would have been different had I immersed myself in that lifestyle.

Or maybe we should go the other extreme. If we can find great ways to do on-line meetings and teaching, why do we need the physical university at all?

## Monday, June 17, 2019

### Why does the Nevanlinna Prize (now Abacus) go to Algorithms/Complexity people?

In my post about the Nevanlinna prize name change (see here) one of my readers raised a different question about the prize:

BEGIN QUOTE

So there's one of my main questions about the prize answered (or at least resolved). The second remains. The IMU's website (which still refers to the Nevanlinna Prize) says that it is awarded "for outstanding contributions in Mathematical Aspects of Information Sciences including:"

1)All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.

2)Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.

Correct me if I'm wrong, but it seems that the recognized work of the ten winners of the award all fits into two or three of the possible research areas for which the prize may be rewarded. Why do people think that this is the case?


END QUOTE

First off, let's see if this is true. Here is a list of all the winners:

Tarjan, Valiant, Razborov, Wigderson, Shor, Sudan, Kleinberg, Spielman, Khot, Daskalakis

Yup, they all seem to be in Algorithms or Complexity.

Speculation as to why:

1) Algorithms and Complexity have problems with short descriptions that can easily be understood: Tarjan proved Planarity was in O(n) time. Valiant defined Sharp-P and showed the Permanent was Sharp-P complete. Hence it is easy to see what they have done. In many of the fields listed, while people have done great work, it may be harder to tell since the questions are not as clean. If my way to do Vision is better than your way to do Vision, that may be hard to prove. And the proof may need to be non-rigorous.

2) If someone does great work in (for example) Logic of Programming Languages, it may not be recognized until she is past 40 years old.

3) This one I am less sure of (frankly I'm not that sure of any of these and invite polite commentary): areas that are more practical may take longer to build and get to work.

But there is still a problem with this. Numerical Analysis and Cryptography would seem to not have these problems.

I invite the reader to speculate.


## Wednesday, June 12, 2019

### Compressing in Moscow

This week finds me in Moscow for a pair of workshops, the Russian Workshop on Complexity and Model Theory and a workshop on Randomness, Information and Complexity. The latter celebrates the lives of Alexander Shen and Nikolay Vereshchagin on their 60th birthdays.

Alexander Shen might be best known in computational complexity for his alternate proof of IP = PSPACE. In 1989, Lund, Fortnow, Karloff and Nisan gave an interactive proof for the permanent, which captured the entire polynomial-time hierarchy by Toda's theorem. But we didn't know how to push the protocol to PSPACE; we had a problem keeping the degrees of the polynomials low. Shamir had the first proof, by looking at a specific protocol for PSPACE. Shen had the brilliant but simple idea to use a degree-reducing operator, taking the polynomial modulo x^2 - x. The three papers appeared back-to-back-to-back in JACM.
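Shen's trick can be illustrated concretely (my own toy sketch, not code from any of the three papers): working modulo x^2 - x, every power x^k with k ≥ 1 collapses to x, so the reduced polynomial has degree at most 1 yet agrees with the original wherever x is 0 or 1, which is all a protocol evaluating polynomials over Boolean assignments needs.

```python
# Toy illustration of a degree-reducing operator: reduce a univariate
# polynomial modulo x^2 - x. Since x^2 == x on inputs in {0, 1}, every
# power x^k (k >= 1) collapses to x, leaving a degree-at-most-1 polynomial
# that agrees with the original on Boolean inputs.

def reduce_mod_x2_minus_x(coeffs):
    """coeffs[k] is the coefficient of x^k. Returns [c0, c1]."""
    c0 = coeffs[0] if coeffs else 0
    c1 = sum(coeffs[1:])  # every x^k with k >= 1 contributes to the x term
    return [c0, c1]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

p = [3, -2, 5, 7]  # represents 3 - 2x + 5x^2 + 7x^3
q = reduce_mod_x2_minus_x(p)
for x in (0, 1):
    assert evaluate(p, x) == evaluate(q, x)  # values on {0,1} are preserved
```

The point of the operator is exactly this invariant: degrees stay low round after round, while nothing the verifier checks on Boolean points ever changes.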

Shen and Vereshchagin, though, made their careers with their extensive work on Kolmogorov complexity and entropy, often together. Vereshchagin and I have co-authored some papers during our mutual trips to Amsterdam, including on Kolmogorov Complexity with Errors and how to increase Kolmogorov Complexity. My favorite work of Shen and Vereshchagin, which they did with Daniel Hammer and Andrei Romashchenko, showed that every linear inequality that holds for entropy also holds for Kolmogorov complexity and vice-versa, the best argument that the two notions of information, one based on distributions, the other based on strings, share strong connections.
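Stated a bit more formally (my paraphrase of the correspondence, not the authors' exact statement), the result says linear inequalities transfer between the two worlds up to logarithmic terms:

```latex
% Paraphrase of the Hammer-Romashchenko-Shen-Vereshchagin correspondence.
% H is Shannon entropy, C is Kolmogorov complexity, I ranges over nonempty
% subsets of indices, X_I and x_I denote the corresponding joint tuples,
% and N is the total length of the strings.
\sum_{I} \lambda_I \, H(X_I) \;\ge\; 0
  \quad\text{for all random variables } X_1,\dots,X_n
\quad\Longleftrightarrow\quad
\sum_{I} \lambda_I \, C(x_I) \;\ge\; -O(\log N)
  \quad\text{for all strings } x_1,\dots,x_n .
```

So an inequality provable for distributions automatically holds, up to logarithmic slack, for individual strings, and conversely.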

Today is Russia Day, which celebrates the reestablishment of Russia out of the Soviet Union in 1990. Just like how the British celebrate their secession from the US in 1776, I guess. But I'm celebrating Russia Day by honoring these two great Russians. Congrats Sasha and Kolya!

## Saturday, June 08, 2019

### Ray Miller, one of our founders, Passes away at the age of 90

Ray Miller, one of the founders of our field, passed away recently at the age of 90.

He has associations with both GA Tech and The University of Maryland, so both Lance and I have a connection to him. As does Dick Lipton who has posted about him here.


I present two guest blog-posts about him.

**Post One**: From Ming C Lin

Elizabeth Stevinson Chair of Computer Science

University of Maryland at College Park

Dear CS Alumni and Friends,

We are deeply saddened to learn that Professor Emeritus Ray Miller passed away two nights ago around 9 pm.

A Memorial Service at St. Andrews Lutheran Church (15300 New Hampshire Ave., Silver Spring MD 20905) for Dr. Miller will be held on Saturday, June 15th at 10:30 am.

Dr. Ray Miller received his Ph.D. from University of Illinois in 1957. He was a professor and the former Director of the School of Information and Computer Science at the Georgia Institute of Technology before joining our department in 1988 as the Director of the Center of Excellence in Space Data and Information Science (CESDIS). Dr. Miller was well known for his research on communication protocols, networks, distributed systems, parallel computation, and theory.

In 1970, he became a Fellow of IEEE for the advancement of the theoretical understanding of computation through work in switching theory and theoretical models.

In 1997, he was elevated to be a Fellow of ACM for research contributions to the theory of parallel computation and for his distinguished service to the Computer Science community as an educator and leader.

In 2003, Dr. Miller was designated as a Fellow by the Computer Science Accreditation Board *"in recognition of his outstanding professional volunteer contributions to computing sciences and accreditation."*

Dr. Miller was also an AAAS Fellow, and a Charter Member of the IEEE Computer Society Golden Core; he received the IEEE Third Millennium Medal in 2000 and the ACM Distinguished Service Award in 2002.

Beyond his outstanding research contribution and devotion to education, Dr. Ray Miller has been known for his kindness as a colleague, supportiveness as a mentor, and effectiveness as a leader. Dr. Miller will be forever remembered warmly by his friends, colleagues and students for his dedication and service to our department, the University, and the field of computing at large.

**Post Two**: From Ben Shneiderman

Emeritus Professor, University of Maryland at College Park.

I was saddened to hear about the death of Ray Miller at age 90. He was a dear colleague who contributed a great deal to computer science and to our department. You can read his 84-page personal memoir at the IEEE Computer Society History Committee website: here.

His memoir tells his story in detail, describing his research collaborations in computational complexity, parallel algorithms, and program optimization, and his leadership roles. You can see more about Ray’s work on his ACM author page: here.

This is the best source as he had no Google Scholar page or Wikipedia article that I could find. Ray’s quiet and modest style was a valuable asset, but his contributions come through in his memoir. He describes working with Turing Awardees John Cocke, Fran Allen, John Backus, Dick Karp, and other key figures, so maybe Ray should have received that award too. Ray was also an excellent administrator and leader, who contributed to building the institutions (conferences, ACM, IEEE, etc.) that supported the growth of computer science.

Ray was especially kind to me in the early 1970s, when I was working on my PhD, developing a graph theoretic model of data structures. Ray was then Assistant Director of the IBM Mathematical Sciences Department at the T. J. Watson Labs in Yorktown Heights, NY. This legendary IBM Research Lab was the equal of Bell Labs and filled with computing pioneers in hardware, software, and applications.

Ray invited me to give a talk about my work, drawing interest from Arnold Rosenberg, who had been developing related ideas. With Ray’s support I returned for monthly visits with Arnie and Ray to refine my esoteric ideas, leading to my May 1973 PhD.

## Thursday, June 06, 2019

### What Happened to the Surprising Theorems?

Twenty-five years ago Peter Shor presented a polynomial-time factoring algorithm for quantum computers. For Peter, it was a simple translation of a quantum algorithm due to Dan Simon. For the rest of us, it was a shock: while we knew quantum could do some seemingly artificial problems exponentially faster, no one expected a natural problem like factoring to fall so quickly. I remember remarking at the time that Shor bought quantum computing twenty years; now I would say fifty.

That may have been the last time I was truly shocked by a theorem in theoretical computer science. I've been shocked by proofs: that Primes are in P, Undirected connectivity in Log space, NEXP not in ACC^0, Graph Isomorphism in quasi-polynomial time. But the theorems themselves all went in the directions we expected.

In the ten years before Shor we had plenty of surprises: interactive proofs, zero-knowledge proofs, probabilistically checkable proofs, nondeterministic space closed under complementation, hardness versus randomness, the permanent hard for the polynomial-time hierarchy. It seemed to come to a hard stop after Shor.

There have been some mild surprises, the Hadamard isn't rigid, holographic algorithms, the complexity of Nash equilibrium, QIP = PSPACE, and many others. But nothing that has made us rethink the complexity world.

This reflects the maturity of our field. How many shocking theorems have we seen recently in math in general? We're shocked by proofs of the Poincaré conjecture and Fermat's last theorem but both went in the expected direction.

We will have some shocking theorem in the future, maybe Factoring in P or L = NL. To be truly shocked it would have to be something I can't even imagine being true today.

## Tuesday, June 04, 2019

### IMU's non-controversial changing the name of the Nevanlinna Prize

(I want to thank Alexander Soifer for supplying me with some of the documents I point to in this post. We should all thank him for getting the ball rolling on changing the name of the Nevanlinna Prize.)

The *Nevanlinna Prize* was essentially a Fields Medal for Theoretical Computer Science. I do not know why it is a *Prize* instead of a *Medal*.

It has been renamed *The Abacus Medal*. If you want to know why the IMU (International Mathematics Union) thinks the new name is good *but do not care even a little about why the original name was bad*, then see this article: here.

So why is *The Nevanlinna Prize* a bad name? In brief, Rolf Nevanlinna was an enthusiastic Nazi sympathizer. How enthused? He served as the chair of the Finnish SS recruitment committee.

That would seem like enough to get the name changed. In fact, it makes one wonder why the prize originally had the name.

1) Why the change now? It began when Alexander Soifer came across this information about Nevanlinna while working on his book *The Scholar and the State: In Search of Van der Waerden* (see here to buy it, see here for a book review column that includes my review of it).

He then wrote a letter to the IMU, which sponsors the *Nevanlinna Prize*. The letter is here. Note that Alexander offered to pay for the prize ($15,000 every four years) if that would help get the name changed.

After a response that lamely said (I paraphrase) *Gee, we didn't know. Oh well*, Alex wrote another letter, which is here.

The story has a happy ending: the name was changed. (No, Alexander is not paying for the award.)

2) For a full summary of why the award was originally named Nevanlinna and why it was changed, see the article *Yes We Can* by Alexander Soifer, in an issue of the journal *Mathematical Competitions*; see here.
3) When is change possible?

Assume Y did X and X is awful (e.g., for most of my readers, believing and spreading Nazi propaganda). Assume there is a Y-prize. What does it take to have the name changed?

a) You need someone pushing hard for it. Kudos to Alexander Soifer who started this.

b) There is no really good reason to use that name in the first place.

What was Nevanlinna's contribution to mathematical aspects of computer science? The IMU (International Mathematics Union) internet page answers:

*The prize was named in honors of Rolf Nevanlinna ... who in the 1950's had taken the initiative to the computer organization at Finnish Universities.*

That's all. If there was a Gauss Prize (actually there IS a Gauss Prize) and we later found out that Gauss was X, I doubt we would change the name of the award. Gauss's name is on it since he is a great mathematician.

c) The person on the award is not the one giving the money. If we found out that Nobel was an X, I doubt the name would change since he paid for it.

d) If the award name is well known then it might not change. Nobel is a good example. I think the Nevanlinna Prize is mostly unknown to the public. The Fields Medal is better known, though still not that well known. The general public became briefly aware of the Fields Medal twice: when it was mentioned in the movie *Good Will Hunting*, and when Perelman turned it down. Fame is fleeting for both prizes and people.
e) Organizations don't like to change things. Hence X would need to be particularly bad to warrant a name change.

OTHER THOUGHTS

1) Why *The Abacus Medal*? Perhaps they are worried that if they name it after someone and that someone turns out to be an X they'll have to change it again. I find the explanation given here to be unsatisfying. I find the fact that they make **NO MENTION** of why they are no longer naming it *The Nevanlinna Prize* appalling and insulting.

2) Let's turn to people who get the awards. If someone solved two Millennium problems and clearly deserved a Fields Medal, but was an X, should they be denied the prize on that basis? I would tend to think no (that is, they should get the prize) but it does trouble me. What would happen? I honestly don't know.

3) X will change over time.

## Wednesday, May 29, 2019

### NSF Panels

The government shutdown in January led to delays at the National Science Foundation, which only recently announced decisions on grants submitted last fall. For those who successfully received awards, congratulations! For those who didn't, don't take it personally; buckle up and try again.

For those who don't know how the process works: for each grant program, the program directors organize one or more panels, which typically meet in person at NSF headquarters in Alexandria, Virginia. A typical panel has about a dozen panelists and twenty or so proposals. Before the panel meets, each proposal gets at least three reviews by the panelists. Discussions ensue over a day or two, proposals get sorted into categories (Highly Competitive, Competitive, Low Competitive and Not Competitive) and then rank ordered in the top categories.

There are tight rules for Conflict-of-Interest and those who are conflicted have to leave the room during the discussions on those papers.

If you do get asked to serve on a panel, you should definitely do so. You get to see how the process works and help influence funding and research directions in your field. You can't reveal when you serve on a particular panel but you can say "Served on NSF Panels" on your CV.

Panels tend to take proposals that will likely make progress and pass on riskier ones. Funding risky proposals is specifically mentioned to the panel, but when push comes to shove and there is less funding than worthy proposals, panelists gravitate towards proposals that don't take chances.

Panels are not unlike conference program committees. It didn't always work this way; it used to be more like journal publications. I remember when the program director would send out proposals for outside reviews and then make funding decisions. That gave the program director more discretion to fund a wider variety of proposals.

The NSF budget for computing goes up slowly while the number of academic computer scientists grows at a much larger clip. Until this changes, we'll have more and more worthy proposals unfunded, particularly proposals of bold risky projects. That's the saddest part of all.


## Monday, May 27, 2019

### separating fact from fiction: 56% of Americans say Arabic Numerals should not be taught in school

On the excellent TV show Veep there was a subplot about a political candidate (who himself had failed algebra in HS) objecting to Algebra since it was invented by the Muslims. I don't recall the exact line, but he said something like `Math teachers are terrorists'

This was, of course, fiction.

The same week I read that 56% of survey respondents say `Arabic Numerals' shouldn't be taught in schools.

Obviously also a fiction. Perhaps a headline from *The Onion*.

No. The story is true.

See snopes entry on this: here

but also see many FALSE but FUNNY websites:

Sarah Palin wants Arabic Numerals out of the schools: here. Funny but false.

Jerry Brown is forcing students in California to learn Arabic Numerals as part of multiculturalism. False but funny: here

A website urging us to use Roman Numerals (which Jesus used!). False but funny: here

OKAY, what to make of the truth that really, really, 56% of Americans are against Arabic Numerals?

1) Bigotry combined with ignorance.

2) Some of the articles I read about this say it's a problem with polls and people. There may be some of that, but it still worries me.

3) In Nazi Germany (WOW- Godwin's law popped up rather early!) they stopped teaching relativity because Albert Einstein was Jewish (the story is more complicated than that, see here). That could of course never happen in America now (or could it, see here and here).

4) There is no danger that we will dump Arabic Numerals. I wonder if we will change their name to Freedom Numerals.

5) Ignorance of science is a more immediate problem with the anti-vax people. See here.

## Friday, May 24, 2019

### Logic Then and Now

This week I attended the Association of Symbolic Logic North American Annual Meeting in New York City, giving a talk on P v NP.

First I must share the announcement that ASL member Tuna Antinel of Lyon 1 University has been arrested in Turkey for his political beliefs. This website (English version) has details and how to show your support.

I last attended the ASL annual meeting at Notre Dame in 1993 as a young assistant professor. Back then I talked about then-recent work using a special kind of generic oracle to make the Berman-Hartmanis isomorphism conjecture true. I remember someone coming up to me after the talk saying how excited they were to see such applications of logic. I'm not a theoretical computer scientist, I'm an applied logician.

I asked at my talk this year and maybe 2-3 people were at that 1993 meeting. The attendance seemed smaller and younger, though that could be my memory playing tricks. I heard that the 2018 meeting in Macomb, Illinois drew a larger crowd. New York is expensive and logicians don't get large travel budgets.

Logic like theoretical computer science has gotten more specialized so I was playing catch up trying to follow many of the talks. Mariya Soskova of Wisconsin talked about enumeration degrees that brought me back to the days I sat in logic classes and talks at the University of Chicago. A set A is enumeration reducible to B if from an enumeration of B you can compute an enumeration of A and Mariya gave a great overview of this area.
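In symbols (the standard definition, matching the prose above): A is enumeration reducible to B, written A ≤_e B, when a single c.e. set W of "axioms" translates finite positive information about B into membership in A.

```latex
% Standard definition of enumeration reducibility. D ranges over finite sets
% (given by canonical index) and W is a computably enumerable set of pairs.
A \le_e B \iff \exists W \text{ c.e. } \forall x \,\bigl[\,
  x \in A \iff \exists D \text{ finite }
  \bigl( \langle x, D \rangle \in W \wedge D \subseteq B \bigr) \bigr].
```

Intuitively, each pair ⟨x, D⟩ in W is an instruction: once the finite set D has appeared in the enumeration of B, enumerate x into A.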

I learned about the status of an open problem for Turing reducibility: Is there a non-trivial automorphism of the Turing degrees? A degree is an equivalence class of languages, all Turing-reducible to each other. So the question asks if there is a bijection f mapping degrees to degrees, other than the identity, that preserves reducibility or lack thereof.

Here's what's known: There are countably many such automorphisms. There is a definable degree C in the arithmetic hierarchy, such that if f(C) = C then f is the identity. Also if f is the identity on all the c.e.-degrees (those equivalence classes containing a computably enumerable set), then f is the identity on all the degrees. Still open if there is more than one automorphism.
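The open question above can be stated symbolically (my paraphrase, using standard notation for the partial order of Turing degrees):

```latex
% The automorphism question for the Turing degrees \mathcal{D} under \le_T:
\text{Does there exist a bijection } f : \mathcal{D} \to \mathcal{D},\;
f \neq \mathrm{id}, \text{ such that }
\mathbf{a} \le_T \mathbf{b} \iff f(\mathbf{a}) \le_T f(\mathbf{b})
\quad \text{for all degrees } \mathbf{a}, \mathbf{b}\,?
```

The known facts constrain any such f severely: it must move the definable degree C mentioned above, and it must already act non-trivially on the c.e. degrees.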


## Monday, May 20, 2019

### Notorious L.A.H or Notorious LAH? OR You always need one more proofread

I noticed a while back that even on the nth proofread of a document there are still corrections. So I decided to keep track of how many corrections there are in a paper I was working on. I chose a non-technical paper so that errors-in-the-math would not be the issue. I chose

Guest Column: The Third P =?NP Poll (see here)

that appeared in Lane Hemaspaandra's SIGACT News Complexity Column.

I kept track of the following:

1) Number of corrections. Anything that I changed. Could be style, a new thought, need not be (though could be) an error.

2) Errors. These are things that really need to be corrected, like having `think' instead of `thing' .

Corrections vs Errors, an Example:

If I refer to Lane Hemaspaandra as *Notorious L.A.N*, that is a correction and an error, as he is Notorious *L.A.H.*

If I refer to Lane Hemaspaandra as *Notorious L.A.H* and decide to change it to *LAH*, that is a correction that is not an error.

I didn't keep track of serious errors vs typos, but after the first 3 proofreads there were no more serious errors---sort of---you'll see. Most serious was a fonts-gone-wild thing where half the paper was in boldface.

Here is a history of the number of corrections

1) Lane proofread the first draft. κ corrections, where κ is some cardinal between the cardinality of N and the cardinality of 2^N. Its value depends on which model of set theory you are in. (My spellchecker thinks that cardinality is not a word. I checked and I am spelling it correctly but perhaps it's one of those things where I stare at it too much and keep misreading it.)

Henceforth I omit the word *proofread* as it is understood.

2) Bill G: 81 corrections, 29 of which were errors.

3) Clyde: 64 corrections, of which 17 were errors.

4) Bill G: 40 corrections, of which 21 were errors (I had added a new section causing more errors)

5) Clyde: 30 corrections of which 10 were errors.

6) Bill G: 24 corrections of which 6 were errors.

7) Clyde: 18 corrections of which 8 were errors.

8) David Sekora (A CS grad student at Maryland who at one time wanted to be an English Major): 15 corrections of which 15 were errors. Really! Typos dagnabbit! (Spell check thinks that dagnabbit is spelled wrong. Um---in that case what is the correct spelling?)

9) Nathan Grammel (A CS grad student at Maryland): 6 corrections of which 3 were errors.

10) Bill G, proofreading backwards, a paragraph at a time: 29 corrections of which 5 were errors.

11) Justin Hontz, an ugrad who TAs for me: 10 corrections of which 7 were errors.

12) Karthik Abinav, a grad student in theory at Maryland: 2 corrections both of which were errors. Was this the end or are there still issues?

13) Josh Twitty, an ugrad who TAs for me: 0 corrections. YEAH!

14) Dan Smolyak, an ugrad CS and Eco major: 4 corrections, all 4 errors.

15) Clyde Kruskal: 20 corrections, 10 of which were errors. To call them errors seems wrong when he corrects

16) Backwards Bill G again: 28 corrections, 14 of which were errors. Again, the errors were minor. Most of the errors were relatively recent. As an example, if I list out topics in math like:

a) Group Theory, Set Theory, and Ramsey Theory

then I am supposed to use capital letters, but if I say in prose

Lance Fortnow thinks that the techniques used will be group theory, set theory, and Ramsey theory

then only the R in Ramsey Theory is in caps. Makes me glad I'm in math.

17) Lane got the penultimate proofread. Lane found 75 (yes 75, WOW) corrections of which 66 (yes 66, WOW) were errors. Many of these were spacing and LaTeX things that I would never have noticed (indeed, I didn't notice) and most readers would not have noticed (hmmm- how do I know that?) but only an editor could catch (hmmm- I've edited the book review column and now the open problems column and I never found more than 10 errors). So when all is said and done: KUDOS to Lane! My point was that you can never get all the errors out. On that I am correct. I wonder if there are still errors? Yeah, but at most 10. However, I said that BEFORE giving it to Lane.

18) Stephen Fenner, the editor of SIGACT News, got the FINAL proofread. He found that I spelled his name wrong. How many errors are left? I would bet at most 10. I would bet that I would lose that bet.

------------------------------------------------------------------------------------------------------------

Why after multiple proofreadings are there still errors? (My spell check thinks proofreadings is not a word. Maybe my spell check is worried that if people get documents proofread a lot then they won't be needed anymore. This blog post refutes that thought.)

1) An error can occur from a correction. This caused a massive problem with another paper. Lane's next column will be by me and co-authors on The Muffin Problem. We had all kinds of problems with the colors and sizes--- Massive Magenta Muffins or Miniature Magenta Muffins? Sizes gone wild! Again Kudos to my proofreaders and to Lane for catching this rather important error.

2) If some passage is added late in the process it will surely have errors.

3) An error correction may clear away the brush so you can see other errors.

4) With LaTeX (or Word for some) we have the ability to get things perfect. So there is no cost to keeping on perfecting things. This leads to many corrections that are not errors.

5) I know of an adviser who would first say change A to B, and later change B back to A. (None of that happened with the paper discussed above).

Are errors inevitable? Clyde Kruskal tells me that his father Martin Kruskal, as a teenager, read Courant and Robbins' book *What is Mathematics?* and claimed to have found errors in it.

MARTIN'S MOTHER: Martin claims to have found errors in your book.

COURANT: (laughs) There are errors in every book.

Courant was so impressed that ten (or so) years later Courant became Martin's PhD adviser.

Guest Column: The Third P =?NP Poll (see here)

that appeared in Lane Hemaspaandra's SIGACT News Complexity Column.

I kept track of the following:

1) Number of corrections. Anything that I changed. Could be style, a new thought, need not be (though could be) an error.

2) Errors. These are things that really need to be corrected, like having `think' instead of `thing'.

Corrections vs Errors, an Example:

If I refer to Lane Hemaspaandra as *Notorious L.A.N.* that is a correction and an error, as he is *Notorious L.A.H.* If I refer to Lane Hemaspaandra as *Notorious L.A.H.* and decide to change it to *LAH* that is a correction that is not an error.

I didn't keep track of serious errors vs typos, but after the first 3 proofreads there were no more serious errors--- sort of--- you'll see. Most serious was a fonts-gone-wild thing where half the paper was in boldface.

Here is a history of the number of corrections:

1) Lane proofread the first draft: κ corrections, where κ is some cardinal between the cardinality of N and the cardinality of 2^{N}. Its value depends on which model of set theory you are in. (My spellchecker thinks that cardinality is not a word. I checked and I am spelling it correctly but perhaps it's one of those things where I stare at it too much and keep misreading it.)

Henceforth I omit the word *proofread* as it is understood.

2) Bill G: 81 corrections, 29 of which were errors.

3) Clyde: 64 corrections, of which 17 were errors.

4) Bill G: 40 corrections, of which 21 were errors (I had added a new section causing more errors)

5) Clyde: 30 corrections of which 10 were errors.

6) Bill G: 24 corrections of which 6 were errors.

7) Clyde: 18 corrections of which 8 were errors.

8) David Sekora (A CS grad student at Maryland who at one time wanted to be an English Major): 15 corrections of which 15 were errors. Really! Typos dagnabbit! (Spell check thinks that dagnabbit is spelled wrong. Um---in that case what is the correct spelling?)

9) Nathan Grammel (A CS grad student at Maryland): 6 corrections of which 3 were errors.

10) Bill G, proofreading backwards, a paragraph at a time: 29 corrections of which 5 were errors.

11) Justin Hontz, an ugrad who TAs for me: 10 corrections of which 7 were errors.

12) Karthik Abinav, a grad student in theory at Maryland: 2 corrections both of which were errors. Was this the end or are there still issues?

13) Josh Twitty, an ugrad who TAs for me: 0 corrections. YEAH!

14) Dan Smolyak, an ugrad CS and Eco major: 4 corrections, all 4 errors.

*Error* sounds too strong. For example, one of them was to replace ?. with ? Yes, it's an error, but not that important. It DOES point to his carefulness as a proofreader.

15) Clyde Kruskal: 20 corrections, 10 of which were errors. To call them errors seems wrong when he corrects *Group theory* to *Group Theory*. None of these corrections were caused by prior comments. I think all of the errors were in the paper early on, undetected until now!

16) Backwards Bill G again: 28 corrections, 14 of which were errors. Again, the errors were minor. Most of the errors were relatively recent. As an example, if I list out topics in math like:

a) Group Theory, Set Theory, and Ramsey Theory

then I am supposed to use capital letters, but if I say in prose

Lance Fortnow thinks that the techniques used will be group theory, set theory, and Ramsey theory

then only the R in Ramsey theory is in caps. Makes me glad I'm in math.

17) Lane got the penultimate proofread. Lane found 75 corrections (yes 75, WOW) of which 66 (yes 66, WOW) were errors. Many of these were spacing and LaTeX things that I would never have noticed (indeed, I didn't notice) and most readers would not have noticed (hmmm- how do I know that?) but only an editor could catch (hmmm- when I've edited the book review column and now the open problems column I never found more than 10 errors). So when all is said and done: KUDOS to Lane! And my point was that you can never get all the errors out. On that I am correct. Are there still errors? Yeah, but at most 10. However, I said that BEFORE giving it to Lane.

18) Stephen Fenner, the editor of SIGACT News, got the FINAL proofread. He found that I spelled his name wrong. How many errors are left? I would bet at most 10. I would also bet that I would lose that bet.

------------------------------------------------------------------------------------------------------------

Why after multiple proofreadings are there still errors? (My spell check thinks proofreadings is not a word. Maybe my spell check is worried that if people get documents proofread a lot then they won't be needed anymore. This blog post refutes that thought.)

1) An error can occur from a correction. This caused a massive problem with another paper. Lane's next column will be by me and co-authors on The Muffin Problem. We had all kinds of problems with the colors and sizes--- Massive Magenta Muffins or Miniature Magenta Muffins? Sizes gone wild! Again Kudos to my proofreaders and to Lane for catching this rather important error.

2) If some passage is added late in the process it will surely have errors.

3) An error correction may clear away the brush so you can see other errors.

4) With LaTeX (or Word for some) we have the ability to get things perfect. So there is no cost to keeping on perfecting things. This leads to many corrections that are not errors.

5) I know of an adviser who would first say change A to B, and later change B back to A. (None of that happened with the paper discussed above).

Are errors inevitable? Clyde Kruskal tells me that his father Martin Kruskal, as a teenager, read Courant and Robbins' book *What is Mathematics?* and found some errors in it. Martin's mother didn't believe him and marched him over to Courant's house:

MARTIN'S MOTHER: Martin claims to have found errors in your book.

COURANT: (laughs) There are errors in every book.

Courant was so impressed that ten (or so) years later Courant became Martin's PhD adviser.

## Thursday, May 16, 2019

### Getting Through the Clutter

My daughter showed up at her doctor's office the other day and found it closed. She complained that she had received all these alerts, texts, emails, voicemails, reminding her of the appointment and then they weren't there when she showed up. She had stopped reading the alerts, the last of which said the appointment needed to be rescheduled.

We all get too many alerts. I just assume many of the emails I send to faculty never get read or remembered. I get many requests to mention conferences, workshops and other theory happenings on this blog because nobody knows how to get the word out through the clutter. If we followed through on all these requests, this blog would become clutter itself.

Back in 2009, Nikhil Devanur and I wrote a paper trying to capture this phenomenon using complexity. Building on Levin's notion of universal search, the unawareness of a string x in an environment E is the amount of time needed to generate x from a context c given an oracle for E. Levin showed that one can have a universal algorithm, only a constant factor slower to generate x than any other algorithm. One should think of E as the sum total of all our knowledge including search engines on the Internet. Technically we should have used the term "attention" instead of "awareness".
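Levin's universal search, which the definition builds on, can be sketched as a dovetailer: in phase t, run candidate program i for roughly 2^(t-i) steps, so the fastest correct candidate dominates the total running time up to a constant factor. Here is a toy sketch in Python (my own illustration with a made-up candidate, not the construction from the paper):

```python
def levin_search(candidates, is_solution, max_phase=20):
    """Toy Levin-style dovetailing over step-bounded generators.

    In phase t, candidate i gets a budget of about 2**(t - i) steps,
    restarting from scratch each phase. The fastest correct candidate
    dominates the total work up to a constant factor.
    """
    for phase in range(max_phase):
        for i, cand in enumerate(candidates):
            budget = 2 ** (phase - i) if phase >= i else 0
            gen = cand()
            # Run this candidate for at most `budget` steps this phase.
            for _, value in zip(range(budget), gen):
                if is_solution(value):
                    return value
    return None

def count_up():
    """A deliberately naive candidate 'program': yields 1, 2, 3, ..."""
    n = 0
    while True:
        n += 1
        yield n

# Find the positive square root of 144 by brute force.
print(levin_search([count_up], lambda v: v * v == 144))  # -> 12
```

With several candidates of different speeds, the 2^(t-i) schedule guarantees the total time is within a constant factor of the best one, which is the sense in which the search is "universal".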

One example is using a calendar as part of your environment, E, that reminds you of an event, x, on that date, c. We use calendars, contact lists, emails searches and many other ways to keep strings we need to remember. Advertisers try to alter your E to get the unawareness of x down.

This is one of those papers that didn't go far because we didn't have great applications for the definition. But it follows a good philosophy: when life seems complex, model it.


## Monday, May 13, 2019

### Ronald Graham's other large number. Well---- it was large in 1964 anyway.

Graham's number (see here) was at one time the largest number to appear in a math proof.

a) GN was an upper bound on a problem in Ramsey theory. There are now better upper bounds, see here. These better upper bounds are still large- Hales-Jewett-large, but that's already much smaller than the original GN.

b) Other proofs now have numbers even larger than GN. For example Harvey Friedman's work on the finite versions of Kruskal's Tree Theorem. (There may be other cases- if you know some then let me know in the comments.)

Since my dept recently moved buildings I found old papers that I had not looked at in years. One of them was *Old and New Problems and Results in Combinatorial Number Theory* by Erdos and Graham (see here).

So I began reading it and came across a result of Graham from 1964 that used large numbers. Nowhere near as large as GN, but I found it interesting that Graham was involved with large numbers way back then.

Here is the problem:

A *Lucas sequence* is a sequence that obeys

a(n) = a(n-1) + a(n-2).

Clearly such a sequence is determined by a(0) and a(1).

QUESTION: Do there exist a(0) and a(1), relatively prime, such that the sequence has only composite numbers?

By ingenuity and some computing power Graham found YES. For how he got the numbers see here. The numbers are of course in the paper, and how he got them is interesting, but I present them anyway. Hope I don't make a typo:

a(0) = 1786772701928802632268715130455793

a(1) = 1059683225053915111058164141686995
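For the curious, here is a quick sanity check (my own Python sketch, not from the paper): generate the first few terms and confirm the starting pair is relatively prime. Verifying that every term is composite is the hard part, which Graham did via covering congruences; the sketch only checks the easy properties.

```python
from math import gcd

# Graham's starting pair, copied from the post.
a0 = 1786772701928802632268715130455793
a1 = 1059683225053915111058164141686995

def lucas_terms(a0, a1, n):
    """First n terms of the sequence a(k) = a(k-1) + a(k-2)."""
    terms = [a0, a1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

terms = lucas_terms(a0, a1, 10)
assert gcd(a0, a1) == 1      # the pair really is relatively prime
assert terms[2] == a0 + a1   # the recurrence holds
```

Python's arbitrary-precision integers handle these 34-digit numbers with no special effort, which is part of why they no longer feel large.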

The paper Old and New... says it's open if there is a smaller pair of numbers; I do not know if it is still open. If you know, let us all know in the comments!

These numbers seem small today since we have modern computers that can store and manipulate them easily. Were they considered large numbers in 1964? They were never called Graham Numbers, which is probably just as well since that honor lay ahead.


## Thursday, May 09, 2019

### Multiple Provers

Just over thirty years ago, on May 5, 1989, I defended my PhD thesis Complexity-Theoretic Aspects of Interactive Proof Systems. It starts off with a parable for interactive proof systems.


Victor, a venture capitalist, had everything a man could desire: money, women and power. But he felt something missing. He decided he lacked knowledge. So Victor packed up his bags and headed to the Himalayas in search of ultimate truths. The natives pointed Victor to a tall mountain and mentioned rumors of a great man full of wisdom. Victor, who smartly brought some climbing equipment, tackled the mountain until he reached a small cave near the summit. Victor found the great Pulu, grand guru of all that is known. Victor inquired to some ultimate truths and Pulu responded, "I will teach you but you must not trust my words." Victor agreed and found he learned much even though he had to verify all the sayings of the great Pulu. Victor though lacked complete happiness and he asked if he could learn knowledge beyond what he could learn in this manner. The grand guru replied, "You may ask and I will answer." Victor pondered this idea for a minute and said, "Since you know all that is known, why can you not predict my questions?" A silence reigned over the mountain for a short while until the guru finally spoke, "You must use other implements, symbols of your past life." Victor thought for a while and reached into his backpack and brought out some spare change he had unwittingly carried with him. Even the great Pulu can not predict the flip of a coin. He started flipping the coins to ask the guru and wondered what can I learn now?

Without the coins, one gets the complexity class NP. My thesis didn't answer the last question, but by the end of the year, Shamir, building on work of Lund, Fortnow, Karloff and Nisan, showed this class IP was equal to PSPACE, the problems we could solve in a polynomial amount of memory.

Part of my thesis explored the class MIP where we had multiple Pulus (provers) on different mountain tops unable to communicate. The news was disappointing, we failed to get a PSPACE upper bound for MIP, only NEXP (nondeterministic exponential time) and our proof that two provers sufficed relied on a bad assumption on how errors get reduced when you run multiple protocols in parallel. Later Babai, Lund and myself showed MIP = NEXP and Ran Raz showed parallel repetition does reduce the error sufficiently.

Back in the 80's we didn't even imagine the possibility that the Pulus had shared entangled quantum bits. Does the entanglement allow the provers to cheat or can the entanglement allow them to prove more things? Turns out to be much more, as a new result by Anand Natarajan and John Wright shows that MIP*, MIP with classical communication, classical verifier and two provers with previously entangled quantum bits, can compute everything in NEEXP, nondeterministic double exponential time. This is only a lower bound for MIP*, possibly one can do even more.

Neat to see my three-decade old thesis explored ideas that people are still thinking about today.

## Monday, May 06, 2019

### Thoughts on the recent Jeopardy streak (SPOILERS)

James Holzhauer has won 22 consecutive games of Jeopardy and has made around 1.6 million dollars. Nice work if you can get it. Here are some thoughts on this:

1) Before James H the record for number of consecutive games won was, and still is, Ken Jennings's 74 in a row; second place was 20. I was surprised that Ken was that much better than the competition.

2) Before James H the record for amount of money won in normal play (not extra from, say, the Tournament of Champions or losing to a computer) was around $400,000. Again, I was surprised that Ken was that much better than the competition.

3) James is obliterating the records for most money won in a single game. He holds the top 12 spots. This is due to his betting A LOT on the Daily Doubles and Final Jeopardy, as well as of course answering so many questions right.

4) One reason players in Jeopardy don't have long streaks is fatigue. They actually play 5 games a day, two days of the week. James H is getting a break since he has two weeks off now since they will soon have the Teachers Tournament. This could work either way--- he gets a break or he loses being in-the-zone.

5) James's strategy is:

a) Begin with the harder (and more lucrative) questions.

b) Bet A LOT on the Daily Doubles (which are more likely to be among the more lucrative questions) and almost always go into Final Jeopardy with more than twice your opponent's total. (He failed to do this only once.)

c) Bet A LOT on Final Jeopardy- though not so much that if you miss you lose the game. I think he's gotten every Final Jeopardy question right.
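The arithmetic behind (b) and (c) is easy to check: with more than twice the second-place score, the leader can find a wager that wins even on a miss. A small sketch (my own, treating a tie as a loss; not from any of the articles):

```python
def safe_final_wager(leader, second):
    """Largest Final Jeopardy wager that guarantees the leader wins,
    assuming second place bets everything and answers correctly.

    Requires leader > 2 * second; returns None when no safe bet exists.
    """
    # Even on a miss, the leader keeps leader - bet, which must exceed
    # the opponent's doubled score 2 * second (the -1 avoids a tie).
    max_bet = leader - 2 * second - 1
    return max_bet if max_bet >= 0 else None

print(safe_final_wager(50000, 20000))  # -> 9999: miss and still keep 40001 > 40000
print(safe_final_wager(30000, 20000))  # -> None: no guaranteed-win wager exists
```

So a big lead lets you bet aggressively AND safely, which is why building the lead early with Daily Doubles matters so much.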

For more on his strategy see this article by Oliver Roeder on Nate Silver's blog: here

6) I tend to think of this as being a high-risk, high-reward strategy and thus it is unlikely he will beat Ken Jennings, but every time he wins that thought seems sillier and sillier. While we are here, how likely is it that someone will beat Ken Jennings? In an article before all of this, Ben Morrison, on Nate Silver's blog, wrote that it was quite likely SOMEONE would break Ken J's record; see here.

7) OKAY, how does James H compare to Ken J? According to Oliver Roeder on Nate Silver's blog (here), they are similar in terms of percent of questions answered right, but James H bets so much more (bets better?), which is why he is getting so much money. I'll be curious to see a head-to-head contest at some point. But to the issue at hand, they don't give James H that good a chance to break Ken J's record.

8) Jeop used to have a 5-game limit. Maybe that was a good idea- it's not that interesting seeing the same person with the same strategy win 22 in a row. Also, the short-talk-with-Alex T--- James is running out of interesting things to say. I wonder what Alex did with Ken J after 50 games.

``So Ken, I hear you're good at Jeopardy''

9) Misc: Ken J was the inspiration for IBM to do Watson.

10) Will future players use James's strategy? Note that you have to be REALLY GOOD in the first place for it to help you. Maybe a modified version where you go for the lucrative questions and bet a lot on Daily Doubles (more than people have done in the past) when it's an area you know really well (I'll take Ramsey Theory for $2000).

11) I used to DVR and watch Jeop but didn't mind if I was a few behind. Now I have to stay on top of it so articles like those pointed to above don't give me a spoiler.

12) My prediction: He will beat Ken Jennings for money but not for number-of-games. I have no real confidence in these predictions.


## Thursday, May 02, 2019

### The Next Chapter

I had a fantastic time at Georgia Tech over the last seven years working with an incredible faculty, staff and students in the School of Computer Science. This is a special place and I enjoyed watching the school, the institute and the City of Atlanta grow and evolve.

After I tweeted the news yesterday, Bill Cook reminded me that


Illinois Tech was the long-time home of Karl Menger, the first person to pose the problem of determining the complexity of the TSP. Now you can settle it!

I wouldn't bet on my settling the complexity of traveling salesman even if I didn't have a college to run. But it goes to remind us that wherever you go in life, P and NP will be right there waiting for you.

## Sunday, April 28, 2019

###
x^{3} + y^{3} + z^{3} = 33 has a solution in Z. And it's big!

Consider the following problem:

Given k, a natural number, determine if there exist x, y, z INTEGERS such that x^{3} + y^{3} + z^{3} = k.

It is not obvious that this problem is decidable (I think it is but have not been able to find an exact statement to that effect; however, if it was not solvable, I would know that, hence it is solvable. If you know a ref give it in the comments.)

If k ≡ 4,5 mod 9 then mod arguments easily show there is no solution. Huisman showed that if k ≤ 1000, k ≡ 1,2,3,6,7,8 mod 9, max(|x|,|y|,|z|) ≤ 10^{15}, and k is NOT one of

33, 42, 114, 165, 390, 579, 627, 633, 732, 795, 906, 921, 975

then there was a solution. For those on the list it was unknown.

Recently Booker (not Cory Booker, the candidate for prez, but Andrew Booker who I assume is a math-computer science person and is not running for prez) showed that

x^{3} + y^{3} + z^{3} = 33

DOES have a solution in INTEGERS. It is

x= 8,866,128,975,287,528

y=-8,778,405,442,862,239

z=-2,736,111,468,807,040
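Both claims in this post are easy to machine-check: cubes mod 9 only take the values 0, 1, 8, so a sum of three cubes is never 4 or 5 mod 9, and Booker's triple can be plugged straight into exact integer arithmetic. A quick sketch (mine, not Booker's code):

```python
# The "easy mod argument": cubes mod 9 land in {0, 1, 8}, so no sum of
# three cubes is congruent to 4 or 5 mod 9.
cubes_mod9 = {pow(r, 3, 9) for r in range(9)}
assert cubes_mod9 == {0, 1, 8}
sums_mod9 = {(a + b + c) % 9 for a in cubes_mod9 for b in cubes_mod9 for c in cubes_mod9}
assert 4 not in sums_mod9 and 5 not in sums_mod9

# Booker's solution for k = 33; Python's exact big-integer arithmetic
# verifies it instantly (finding it was the hard part).
x, y, z = 8866128975287528, -8778405442862239, -2736111468807040
assert x**3 + y**3 + z**3 == 33
```

Checking a proposed triple takes microseconds; the search for one took Booker serious computing time, which is the real content of his result.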

Does that make us more likely or less likely to think that

x^{3} + y^{3} + z^{3} = 42

has a solution? How about 114, and the others on the list?

Rather than say what I think is true (I have no idea) here is what I HOPE is true: that the resolution of these problems leads to some mathematics of interest.


## Thursday, April 25, 2019

### Geo-Centric Complexity

An interesting discussion arose during Dagstuhl last month about the US-centric view of theory. It's bad enough that all talks and papers in an international venue are in English, but we also have

- Manhattan Distance. How are foreigners supposed to know about the structure of streets in New York? What's wrong with grid distance?
- Las Vegas Algorithms. I found this one a little unfair; after all, Monte Carlo algorithms came first. Still, today might not Macau algorithms make sense?
- Arthur-Merlin Games. A British reference by a Hungarian living in the US (László Babai who also coined Las Vegas algorithms). Still the Chinese might not know the fables. Glad the Europeans don't remember the Pat and Vanna terminology I used in my first STOC talk.
- Alice and Bob. The famous pair of cryptographers but how generically American can you get. Why not Amare and Bhati?
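For readers who know the casinos but not the classes: a Las Vegas algorithm is always correct but has a random running time, while a Monte Carlo algorithm runs in bounded time but may err. A toy contrast (my own illustration, not from the Dagstuhl discussion):

```python
import random

def las_vegas_find_one(bits):
    """Always returns an index of a 1; the running time is random.
    Assumes bits contains at least one 1, so this terminates with probability 1."""
    while True:
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i

def monte_carlo_find_one(bits, trials=20):
    """Bounded time: at most `trials` probes, but it may fail (return None).
    With half the bits equal to 1, the failure probability is 2**-trials."""
    for _ in range(trials):
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i
    return None

bits = [0, 1] * 8
idx = las_vegas_find_one(bits)
assert bits[idx] == 1   # never wrong, just possibly slow
```

Whatever cities we name them after, the trade-off is the same: certainty of the answer versus certainty of the deadline.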

I am of two minds here. We shouldn't alienate or confuse those who didn't grow up in an Anglo-American culture. On the other hand, I hate to have to try to make all terminology culturally neutral; you'd just end up with technical and ugly names, like P and NP.

## Monday, April 22, 2019

### Quiz Show Scandals/Admissions Scandal/Stormy Daniels/Beer names: being a lawyer would drive me nuts!!!!!!

0) Charles van Doren (see here) passed away recently. For those who don't know who he was (probably most of you): he was one of the contestants involved in RIGGED quiz shows in the 1950's. While there was a Grand Jury hearing about quiz shows being rigged, nobody went to jail since TV was new and it was not clear if rigging quiz shows was illegal. Laws were then passed to make it illegal.

So why are today's so-called reality shows legal? I ask non-rhetorically.

(The person he beat in a rigged game show, Herb Stempel (see here), is still alive.)

1) The college admissions scandal. I won't restate the details and how awful it is since you can get that elsewhere and I doubt I can add much to it. One thing I've heard in the discussions about it is a question that is often posed rhetorically but I want to pose for real:

There are people whose parents give X dollars to a school and they get admitted even though they are not qualified. Why is that legal?

I ask that question without an ax to grind and without anger. Why is outright bribery of this sort legal?

Possibilities:

a) It's transparent. So being honest about bribery makes it okay?

b) My question said `even though they are not qualified'- what if they explicitly or implicitly said `having parents give money to our school is one of our qualifications'?

c) The money they give is used to fund scholarships for students who can't afford to go. This is an argument for why it's not immoral, not why it's not illegal.

But here is my question: Really, what is the legal issue here? It still seems like bribery.

2) Big Oil gives money to Congressman Smith, who then votes against a carbon tax. This seems like outright bribery.

Caveat:

a) If Congressman Smith is normally anti-regulation then he could say correctly that he was given the money because they agree with his general philosophy, so it's not bribery.

b) If Congressman Smith is normally pro-environment and has no problem with voting for taxes then perhaps it is bribery.

3) John Edwards a while back and Donald Trump now are claiming (not quite) that the money used to pay off their mistresses to be quiet is NOT a campaign contribution, but was to keep the affair from their wives. (I don't think Donald Trump has admitted the affair so it's harder to know what his defense is.) But let's take a less controversial example of `what is a campaign contribution':

I throw a party for my wife's 50th birthday and I invite Beto O'Rourke and many voters and some Dem party big-wigs to the party. The party costs me $50,000. While I claim it's for my wife's bday it really is for Beto to make connections to voters and others. So is that a campaign contribution?

4) The creators of HUGE ASS BEER are suing GIANT ASS BEER for trademark infringement. I am not making this up- see here

---------------------------------------------------------

All of these cases involve ill defined questions (e.g., `what is a bribe'). And the people arguing either side are not unbiased. The cases also illustrate why I prefer mathematics: nice clean questions that (for the most part) have answers. We may have our biases as to which way they go, but if it went the other way we would not sue in a court of law.

So why are today's so-called reality shows legal? I ask non-rhetorically.

(The person he beat in a rigged game show, Herb Stempel (see here), is still alive.)

1) The college admissions scandal. I won't restate the details and how awful it is since you can get that elsewhere and I doubt I can add much to it. One thing I've heard in the discussions about it is a question that is often posed rhetorically but I want to pose for real:

There are people whose parents give X dollars to a school and they get admitted even though they are not qualified. Why is that legal?

I ask that question without an ax to grind and without anger. Why is out-right bribery of this sort legal?

Possibilities:

a) It's transparent. So being honest about bribery makes it okay?

b) My question said `even though they are not qualified' - what if they explicitly or implicitly said `having parents give money to our school is one of our qualifications'?

c) The money they give is used to fund scholarships for students who can't afford to go. But this is an argument for why it's not immoral, not for why it's not illegal.

But here is my question: Really, what is the legal issue here? It still seems like bribery.

2) Big Oil gives money to Congressman Smith, who then votes against a carbon tax. This seems like outright bribery.

Caveats:

a) If Congressman Smith is normally anti-regulation then he could say, correctly, that he was given the money because they agree with his general philosophy, so it's not bribery.

b) If Congressman Smith is normally pro-environment and has no problem voting for taxes then perhaps it is bribery.

3) John Edwards a while back, and Donald Trump now, are claiming (not quite) that the money used to pay off a mistress to keep quiet is NOT a campaign contribution but was meant to keep the affair from his wife. (I don't think Donald Trump has admitted the affair, so it's harder to know what his defense is.) But let's take a less controversial example of `what is a campaign contribution':

I throw a party for my wife's 50th birthday and I invite Beto O'Rourke, many voters, and some Dem party big-wigs. The party costs me $50,000. While I claim it's for my wife's bday, it really is for Beto to make connections with voters and others. So is that a campaign contribution?

4) The creators of HUGE ASS BEER are suing GIANT ASS BEER for trademark infringement. I am not making this up - see here.

---------------------------------------------------------

All of these cases involve ill-defined questions (e.g., `what is a bribe?'). And the people arguing either side are not unbiased. The cases also illustrate why I prefer mathematics: nice clean questions that (for the most part) have answers. We may have our biases as to which way they go, but if it went the other way we would not sue in a court of law.

## Thursday, April 18, 2019

### Physics of Everyday Life

Based on Scott's review, I read through Stephen Pinker's Enlightenment Now. I can't top Scott's exposition of the book, but it is pretty incredible how far humanity has gone when you step back to look at the big picture.

One line intrigued me, one that Pinker credits to a book called The Big Picture by Sean Carroll

*The laws of physics underlying everyday life (that is, excluding extreme values of energy and gravitation like black holes, dark matter and the Big Bang) are completely known.*

Hasn't this statement almost always been true, in the sense that the leading minds would have made this claim at many times in history? The ancient Greeks probably believed they understood the physics that underlies everyday life. So did physicists after Newton. But that was everyday life back then, not today: my everyday life involves using a GPS device that requires understanding relativistic effects, and computer chips that needed other scientific advances.

Is it possible we could do more in everyday life if we knew more physics? I'd certainly use a teleporter in everyday life.

And is the statement even true today? We all use public key cryptography, even to read this blog. It's not completely clear if we understand the physics enough to know how or if large-scale quantum computers capable of breaking those systems can be built.

Everyday life is relative.

## Monday, April 15, 2019

### Good article, terrible headline

About a month ago (after my P NP poll appeared) I got email from Jacob Aron asking me some questions about it. One thing he was excited about was that the number of people who thought P vs NP would be solved in the next decade had increased from 11% to 22%. I told him that this also surprised me and

there had been no major advances to warrant that increase.

Then the article came out. Here is the pointer to the headline and the first few lines; the rest is behind a paywall:

here

You may notice that the headline is

*We could solve the biggest problem in math in the next decade*

I emailed Jacob to send me the article, which he did. The article was fine, even quoting me as saying that the increase in the number of people who thought it would be solved soon was unwarranted.

1) So, article fine, headline terrible.

2) A more honest headline would be

*The Complexity Theory Community slightly more optimistic about when P vs NP will be resolved for no apparent reason.*

3) More bad science: here

Headline is

*Turing's Oracle: The computer that goes beyond logic*

I think the article was about how a Turing Machine with an oracle for HALT can solve HALT. (If I am wrong about this let me know in the comments and I'll correct it.)

4) More bad science:here

Headline

*Finally, a problem that only Quantum Computers will ever be able to solve.*

This was about the oracle such that BQP is not in PH. Really!

5) I invite you to add your own.

6) OKAY, so why is the press SO BAD at reporting on our field? And is it just our field? I have one explanation, though I am sure there are many.

Our field is about showing the LIMITS of computation. It's hard to make that sexy, so they... lie? Exaggerate? Or maybe they themselves don't really understand our field? Note:

To explain to someone who does not really know CS why it's important to have an oracle where BQP is not in PH is hard.

To explain this to someone IN CS but not in Theory is still hard!

To explain this to someone IN CS and even in CS Theory, but not complexity (e.g., algorithms) might be hard, though it may depend on the person.

7) The old saying is `I don't care if you get the story wrong so long as you spell my name right.' And indeed, they did spell my name right. So there is that! But more seriously, and less about me or even the article that refers to my poll: is it bad that science reporting is often wrong?

## Thursday, April 11, 2019

### Elwyn Berlekamp Died April 9, 2019

Elwyn R. Berlekamp entered this world on Sept 6, 1940, and left it on April 9, 2019. Wikipedia calls him

*An American Mathematician*, which seems too narrow to me. He worked on a variety of topics across math, computer science, finance, and likely others --- if you know any that I missed, leave comments.

Here are some of the things he worked on:

1) Factoring Polynomials over large finite fields (See here from 1970)

I quote the abstract

This paper reviews some of the known algorithms for factoring polynomials over finite fields and presents a new deterministic procedure for reducing the problem of factoring an arbitrary polynomial over the Galois field GF(p^{m}) to the problem of finding roots in GF(p) of certain other polynomials over GF(p). The amount of computation and the storage space required by these algorithms are algebraic in both the degree of the polynomial to be factored and the log of the order of the field.

Certain observations on the application of these methods to factorization of polynomials over the rational integers are also included.

I think he meant what we would call polynomial time when he says algebraic.

2) Combinatorial Games. He co-wrote the classic Winning Ways For your Mathematical Plays with John Conway and Richard Guy. These books are like NIM on steroids. They are FUN and have some serious math in them. (My blog system thinks Combinatorial is not a word. It is wrong)

3) Combinatorial Games. He was very interested in Go via Combinatorial Games. I have an article (alas, not online) by him entitled Introductory Overview of Mathematical Go Endgames. There has been further work on this that is on the web, see here. I wonder what ERB would think of the ALPHAGO program.

4) Berlekamp-Massey Algorithm finds the shortest linear feedback shift register for a given binary output sequence.
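Since the post gives only that one-line description, here is a minimal sketch of the algorithm over GF(2) (binary sequences, XOR arithmetic); the coding style and the binary specialization are mine, not Berlekamp's or Massey's original presentation.

```python
def berlekamp_massey(bits):
    """Return (L, taps) for the shortest LFSR over GF(2) that
    generates the given 0/1 sequence; taps is the connection
    polynomial C(x) = taps[0] + taps[1] x + ... + taps[L] x^L."""
    n = len(bits)
    c = [1] + [0] * n   # current connection polynomial C(x)
    b = [1] + [0] * n   # copy of C(x) from before the last length change
    L, m = 0, -1        # current LFSR length, index of last length change
    for i in range(n):
        # discrepancy: does the current LFSR predict bits[i]?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:  # prediction failed: fix C(x) with a shifted copy of B(x)
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:  # the minimal LFSR length must grow
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]
```

For the input 1, 1, 0, 1 this returns length 2 with connection polynomial 1 + x + x^2, i.e. each bit is the XOR of the previous two.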

5) Berlekamp-Welch Algorithm efficiently corrects errors in Reed-Solomon codes. (Berlekamp also wrote a book on Algebraic Coding Theory.)

6) I didn't know about his work in finance until I began writing this, but the following quote from Wikipedia is impressive

In 1989, Berlekamp purchased the largest interest in a trading company named Axcom Trading Advisors. After the firm's futures trading algorithms were rewritten, Axcom's Medallion Fund had a return (in 1990) of 55%, net of all management fees and transaction costs. The fund has subsequently continued to realize annualized returns exceeding 30% under management by James Harris Simons and his Renaissance Technologies Corporation.

## Sunday, April 07, 2019

### Problems with a point- NOT a plug, just some thoughts on books and book writing

Problems with a Point: Exploring Math and Computer Science by Gasarch and Kruskal, available on amazon here, came out a while back and I plugged it in my blog post here. Regan-Lipton reviewed it here.

This post is NOT a plug; it's random points about the book and book writing.

1) On Feb 28 I checked my Amazon ranking twice. At noon it was at 400,000 and at 4:00PM it was at roughly 80,000. Either (a) someone bought a copy, or (b) the number of sales of such books is so small that Amazon does rankings arbitrarily. A counterargument to that: Bounded Queries in Recursion Theory, which I wrote in 1998 and which, despite an awesome review on Amazon that I wrote, sold literally 0 copies last year, is ranked 5,000,000. So even within the low numbers there is SOME logic to the rankings. (Update- Today, April 7, PWAP is at 800,000 and BQIRT is at 12,000,000. I suspect that nothing has changed with BQIRT and the 5,000,000 or 12,000,000 is arbitrary. As for PWAP--- since it's still a new book the ranking might mean something. Oh well- fame is fleeting!)

1.5) At one time my book was 29th in the category

*Teen and Young Adult Computer Programming.*

Really? While teens and young adults might read it, I don't think of it as geared towards them (only one use of *OMG* and two uses of *awesome* in the entire book). And as for Computer Programming... uh... no.

2) I emailed many people about it and many people are buying it. Some tell me that the cover looks great so YES, they are judging a book by its cover. My mom asked how it was selling. I told her that (a) about 20 people told me they bought it, and (b) I doubt that's how JK Rowling responds when she's asked how her Harry Potter books are selling.

3) Many people could actually read this book and would care about its content. When I wrote Bounded Queries in Recursion Theory (with Georgia Martin) I didn't email a lot of people about it since it was rather specialized.

4) Some people said `Gee Bill, we didn't know you were working on a book'. That's true- I didn't talk about it much. Clyde knew (my co-author, so he had to know), Lance knew, Darling knew, Mom knew. That might be about it. Why so few? See next point.

5) I had heard there are two kinds of people:

*Those that talk about writing a book but don't do it*

*Those that write a book instead of talking about it*

I chose to be the second type.

6) When I got an advance copy I couldn't stop reading it. Ego? Disbelief? Some of each?

7) ADVICE for book writers: just get it done. You can polish it later but first get it done so you have something to polish.

8) More advice: You can never have too many proofreaders. Even though it's been gone over a lot, I still read it and find minor things to fix OR think `gee, why didn't I include BLAH in the book'.

9) Title: `Problems with a Point' I came up with very early and I liked it a lot. World Scientific (our publisher) pointed out, correctly, that we needed a subtitle since PWAP didn't really tell the reader what's in it. I wanted to make it alliterative but it was hard. Here were some possibilities:

PWAP: Mathematical Musing Made Magnificent (can replace Musings with Morsels)

PWAP: Mathematical Meditations and Computer Science Cogitations

PWAP: Mathematical Musings and the Math that Makes those Musings Matter

All in all I am happy with the non-alliterative, but informative

PWAP: Exploring Math and Computer Science.

10) Odd thing I learned- if I quote someone who made a comment on my blog or emailed me I needed permission. Actually I doubt I NEEDED it, but the laws here are murky so I got it just to make sure. That explains this blog post from a while back: here

11) ADVICE for book writers: Write the preface last since then you have a better idea of why you wrote the book.

## Thursday, April 04, 2019

### Cuckoo Cycles

*Guest Blog by John Tromp (Thanks for the opportunity, Lance)*

In the past few months, cycle finding has become one of the most widely run graph theory problems. An estimated quarter-to-half million GPUs are constantly looking for 42-cycles in random bipartite graphs on a billion or more nodes. Customized chips have been designed to perform this specific application an order of magnitude more efficiently, thanks to integrating as much as 1GB of SRAM on a single chip, where Intel/AMD CPUs top out at 64 MB.

The purpose of all this? To compete for block rewards in various cryptocurrencies using the Cuckoo Cycle Proof of Work.

It is named after the Cuckoo Hashtable, in which each data item can be stored at two possible locations, one in each of two arrays. A hash function maps the pair of item and choice of array to its location in that array. When a newly stored item finds both its locations already occupied, one is kicked out and replaced by the new item. This forces the kicked-out item to move to its alternate location, possibly kicking out yet another item. Problems arise if the corresponding Cuckoo graph, a bipartite graph with array locations as nodes and item location pairs as edges, has cycles. While n items, whose edges form a cycle, could barely be stored in the table, any n+1st item mapping within the same set of locations (a chord in the cycle) would not fit, as the pigeonhole principle tells us.
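The kick-out dynamic described above is easy to sketch in code. This is a toy table with made-up multiplicative hash functions (integer items only, nothing like a production implementation); giving up after `max_kicks` is exactly the situation where a real table suspects a cycle in the cuckoo graph and rehashes.

```python
class CuckooTable:
    """Toy cuckoo hash table: two arrays, two hash functions,
    and every item lives in one of its two possible locations."""

    def __init__(self, size=11):
        self.size = size
        self.arrays = [[None] * size, [None] * size]

    def _loc(self, item, which):
        # made-up hash pair for illustration (integer items only)
        return (item * 2654435761 + which * 40503) % self.size

    def insert(self, item, max_kicks=50):
        which = 0
        for _ in range(max_kicks):
            pos = self._loc(item, which)
            if self.arrays[which][pos] is None:
                self.arrays[which][pos] = item
                return True
            # kick out the occupant; it must move to its alternate array
            self.arrays[which][pos], item = item, self.arrays[which][pos]
            which = 1 - which
        return False  # too many kicks: likely a blocking cycle, rehash needed

    def lookup(self, item):
        return any(self.arrays[w][self._loc(item, w)] == item for w in (0, 1))
```

A failed `insert` is precisely the pigeonhole situation the paragraph describes: more items than the cycle's set of locations can hold.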

In the Proof-of-Work problem, we typically set n ≥ 29, and use edge indices (items) 0 .. N-1, where N = 2^{n}. The endpoints of edge i are (siphash(i|0) % N, siphash(i|1) % N), with siphash being a popular keyed hash function. A solution is a cycle of length L in this Cuckoo graph, where typically L = 42.

While looking for cycles takes a lot of time and memory, solutions can be instantly verified. This sets Cuckoo Cycle apart from many other memory-hard proofs of work that require computing a memory-hard hash function, such as scrypt, both for proving and verifying. Some luck is also required to find a 42-cycle; Slides 18 and 19 of this Grincon.US presentation sketch a proof of the

**Cuckoo Cycle Theorem**: The expected number of L-cycles tends to 1/L.

While there is a known linear time-memory trade-off, it suffers from a large constant factor penalty. A large bounty is offered on the project webpage for better trade-offs.

Cuckoo Cycle solvers spend nearly all cycles on edge trimming: identifying and removing edges that end in a leaf node (of degree 1), as such edges cannot be part of a cycle. Trimming rounds alternate between the two node partitions. This can be done using N bits for the edge bitmap, and N log_2(3) bits for the (truncated) node degrees. The random accesses to the latter, while not a problem for custom chips using enough SRAM, cause large memory latencies on commodity hardware. These can be mostly avoided by storing remaining edges explicitly and (bucket) sorting them to determine node degrees. Despite using an order of magnitude more memory than the edge bitmap, this is still over 4x faster.
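To see trimming in action on a small scale, here is a sketch of one round, with blake2b standing in for siphash purely so the code is self-contained (the key, the tiny graph size, and the function names are my own). An edge should survive the first round with probability about 1 - 1/e ≈ 0.632, the chance that its endpoint is shared with at least one other edge.

```python
import hashlib
from collections import Counter

def endpoint(key, i, side, N):
    # stand-in for siphash(i|side) % N; real Cuckoo Cycle uses siphash-2-4
    h = hashlib.blake2b(i.to_bytes(8, "little") + bytes([side]),
                        key=key, digest_size=8)
    return int.from_bytes(h.digest(), "little") % N

def trim_round(edges, key, N, side):
    """Drop edges whose endpoint on the given side is a leaf (degree 1)."""
    deg = Counter(endpoint(key, i, side, N) for i in edges)
    return [i for i in edges if deg[endpoint(key, i, side, N)] > 1]

N = 1 << 14                 # tiny compared to the n >= 29 used in practice
key = b"demo-key"
edges = list(range(N))      # edge indices 0 .. N-1
edges = trim_round(edges, key, N, side=0)
print(f"fraction surviving one round: {len(edges) / N:.3f}")
```

Alternating `side` between 0 and 1 on successive calls gives the round structure described above.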

The fraction f_i of remaining edges after i trimming rounds (in the limit as N goes to infinity) appears to obey the

**Cuckoo Cycle Conjecture:** f_i = a_{i-1} * a_i, where a_{-1} = a_0 = 1, and a_{i+1} = 1 - e^{-a_i}.

f_i could equivalently be defined as the fraction of edges whose first endpoint is the middle of a (not necessarily simple) path of length 2i.

So far I have only been able to prove the conjecture for i ≤ 3. For instance, for i = 1, the probability that an edge endpoint is not the endpoint of any other edge is (1-1/N)^{N-1} ~ 1/e. Here's hoping someone finds an elegant proof...
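The conjectured recurrence is easy to iterate numerically; this quick check (mine, and of course assuming the conjecture) shows how fast the surviving fraction falls:

```python
import math

def conjectured_fractions(rounds):
    """Iterate a_{i+1} = 1 - exp(-a_i) with a_{-1} = a_0 = 1 and
    report f_i = a_{i-1} * a_i for i = 1 .. rounds."""
    a_prev, a = 1.0, 1.0              # a_{-1}, a_0
    fracs = []
    for _ in range(rounds):
        a_prev, a = a, 1.0 - math.exp(-a)
        fracs.append(a_prev * a)      # f_i = a_{i-1} * a_i
    return fracs

for i, f in enumerate(conjectured_fractions(5), start=1):
    print(f"f_{i} = {f:.4f}")
```

For i = 1 it gives f_1 = 1 - 1/e ≈ 0.6321, matching the (1-1/N)^{N-1} ~ 1/e calculation.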
