Sunday, June 30, 2019

A proof that 22/7 - pi > 0 and more

My father was a High School English teacher who did not know much math. As I was going off to college, intending to major in math, he gave me the following sage advice:

1) Take Physics as well as Math since Physics and Math go well together. This was good advice. I took the first year of Physics for Physics Majors, and I later took a senior course in Mechanics since that was my favorite part of the first year course. Kudos to Dad!

2) π is exactly 22/7. I knew this was not true, but I also knew that I had no easy way to show him this. In fact, I wonder if I could have proven it myself back then.

Problem A-1 on the 1968 Putnam exam:

Prove 22/7 - π = ∫₀¹ x⁴(1-x)⁴/(1+x²) dx

(I can easily do this by partial fractions, remembering that ∫ 1/(1+x²) dx = tan⁻¹(x), which is the inverse tangent, not 1/tan. See here.)
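
As a sanity check (mine, not part of the Putnam solution), a few lines of Python approximate the integral with Simpson's rule and compare it to 22/7 - π:

```python
# Numerical check (not a proof) that
#   integral_0^1 x^4 (1-x)^4 / (1+x^2) dx  =  22/7 - pi,
# using the composite Simpson's rule on a fine grid.
import math

def f(x):
    return x**4 * (1 - x)**4 / (1 + x**2)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

integral = simpson(f, 0.0, 1.0, 1000)
print(integral, 22/7 - math.pi)   # both ~0.0012645
```

Since the integrand is strictly positive on (0,1), the identity immediately gives 22/7 - π > 0.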

(ADDED LATER---I have added conjectures on getting integrals of the form above except with 4 replaced by any natural number. Be the first on your block to solve my conjectures! It has to be easier than the Sensitivity Conjecture!)

Let n ∈ N, which we will choose later. By looking at the circle inscribed in a regular n-gon (n even) one finds that

n tan(π/n) > π

So we seek an even value of n such that

n tan(π/n) < 22/7

Using Wolfram Alpha, the smallest such n is 92.
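
A hedged sketch of that search (using Python's math library rather than Wolfram Alpha; the function name is mine):

```python
# Search for the smallest even n with n * tan(pi/n) < 22/7.
# n * tan(pi/n) is the half-perimeter of a regular n-gon circumscribed
# about a unit circle, and it decreases toward pi as n grows.
import math

def smallest_even_n(bound=22/7):
    n = 4
    while n * math.tan(math.pi / n) >= bound:
        n += 2
    return n

print(smallest_even_n())  # 92
```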

Would that convince Dad? Would he understand it? Probably not. Oh well.

Some misc points.

1) While working on this post I originally wanted to find tan(π/2^7) by using the half-angle formula many times, and get an exact answer in terms of radicals, rather than using Wolfram Alpha.

a) While I have in my house lots of combinatorics books, theory of comp books, and more Ramsey Theory books than one person should own, I didn't have a SINGLE book with any trig in it.

b) I easily found it on the web:

tan(x/2) = sqrt( (1-cos x)/(1+cos x) ) = sin x/(1+cos x) = (1-cos x)/(sin x).

None of these seems like it would get me a nice expression for tan(π/2^7). But I don't know. Is there a nice expression for tan(π/2^k)? If you know of one then leave a polite comment.
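
For what it's worth, the half-angle identity can at least be run numerically: rewriting the formulas above gives tan(x/2) = tan(x)/(1 + sqrt(1 + tan²x)), and one can iterate from the exact value tan(π/4) = 1. This is just my sketch; each step only introduces square roots, which suggests tan(π/2^k) is expressible in nested radicals, though not obviously a nice expression:

```python
# Compute tan(pi/2^k) by repeated use of the half-angle identity
#   tan(x/2) = tan(x) / (1 + sqrt(1 + tan(x)^2)),
# starting from the exact value tan(pi/4) = 1.
import math

def tan_pi_over_2k(k):
    """tan(pi/2^k) for k >= 2, via k-2 half-angle steps from tan(pi/4)=1."""
    t = 1.0                       # tan(pi/4)
    for _ in range(k - 2):
        t = t / (1 + math.sqrt(1 + t * t))
    return t

for k in range(2, 8):
    print(k, tan_pi_over_2k(k), math.tan(math.pi / 2**k))
```

The first step gives tan(π/8) = 1/(1+√2) = √2 - 1, which matches the known closed form.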

2) I assumed that there was a more clever and faster way to do the integral. I could not find old Putnam exams and their solutions on the web (I'm sure they are there someplace! If you know where, then comment politely with a pointer). So I got a book out of the library, The William Lowell Putnam Mathematical Competition Problems and Solutions 1965--1984 by Alexanderson, Klosinski, and Larson. Here is the clever solution:

The standard approach from Elementary Calculus applies.

Not as clever as I was hoping for.

3) I also looked at the integral with 4 replaced by 1, 2, 3, ..., 16. The results are in the writeup I pointed to before. It looks like I can use this sequence to get upper and lower bounds on π, ln(2), π + 2ln(2), and π - 2ln(2). I have not proven any of this. But take a look! And as noted above, I have conjectures!
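
Here is one way (my own sketch, not the conjectures from the writeup) to see where π and ln(2) come from. Long division of x^n(1-x)^n by x²+1 leaves a remainder ax + b, and ∫₀¹ (ax+b)/(1+x²) dx = (a/2)ln 2 + bπ/4, so each integral is a rational number plus rational multiples of ln 2 and π. A small exact computation with Python's fractions module:

```python
# Write I_n = integral_0^1 x^n (1-x)^n / (1+x^2) dx exactly as
#   I_n = r + c_ln2 * ln(2) + c_pi * pi   with r, c_ln2, c_pi rational,
# by long division of x^n (1-x)^n by x^2 + 1.
from fractions import Fraction
from math import comb

def In_closed_form(n):
    """Return (r, c_ln2, c_pi) with I_n = r + c_ln2*ln(2) + c_pi*pi."""
    # Coefficients of x^n (1-x)^n, lowest degree first.
    p = [Fraction(0)] * (2 * n + 1)
    for k in range(n + 1):
        p[n + k] = Fraction((-1) ** k * comb(n, k))
    # Divide by x^2 + 1, leaving remainder p[1]*x + p[0].
    q = [Fraction(0)] * (2 * n - 1)
    for d in range(2 * n, 1, -1):
        c = p[d]
        q[d - 2] += c
        p[d] = Fraction(0)
        p[d - 2] -= c
    rational = sum(q[d] / (d + 1) for d in range(len(q)))  # integral of quotient
    return rational, p[1] / 2, p[0] / 4

print(In_closed_form(4))  # (Fraction(22, 7), Fraction(0, 1), Fraction(-1, 1))
```

Since each I_n is positive, each such identity turns into an inequality between a rational number and the corresponding combination of π and ln 2, which is consistent with the bounds described above.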

4) When I looked up INSCRIBING a circle in a regular n-gon, Google kept giving me CIRCUMSCRIBING. Why? I do not know but I can speculate. Archimedes had a very nice way of using circumscribed polygons to approximate π. It's on YouTube here. Hence people are used to using circumscribed rather than inscribed figures.

Thursday, June 27, 2019

FCRC 2019

 Georgia Tech FCRC Participants
I'm heading home from the 2019 ACM Federated Computing Research Conference in Phoenix, a collection of computer science meetings including STOC, COLT and EC.

Geoff Hinton and Yann LeCun gave their Turing Award lectures; their co-winner Yoshua Bengio was not in attendance. Hinton talked about how machine learning triumphed over symbolic AI. LeCun argued that under uncertainty, one should learn the distribution instead of just the average. If you want more, just watch it yourself.

To get to the STOC lectures you go up and down escalators and pass through ISCA (Computer Architecture) and PLDI (Programming Languages). It's like you are going up the computing stack until you reach algorithms and complexity.

The conference center was just two blocks from Chase Field so we could take in a Diamondbacks baseball game. They opened the roof because the temperature dropped into the double digits. Last night, Paul McCartney played at an arena just a block from the conference center, but instead I hung out at an Uber reception for the EC conference.

Let me mention a best paper awardee, The Reachability Problem for Petri Nets is Not Elementary by Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jerome Leroux and Filip Mazowiecki. In a Petri net you have a list of vectors of integers and an initial and final vector. You start with the initial vector and can add any of the other vectors nondeterministically, as often as you like, as long as no coordinate goes negative. Can you get to the final vector? This problem was known to be computable in "Ackermannian" time and EXPSPACE-hard. This paper shows the problem is not elementary, i.e., not solvable in time bounded by a tower of 2's of any fixed height. A recent result shows Petri net reachability is primitive recursive for fixed dimensions.

Avi Wigderson gave the Knuth Prize lecture exploring deep connections between mathematics and algorithms. Hopefully the video will be online soon.

STOC next year in Chicago, EC as part of the Game Theory Congress in Budapest.

Monday, June 24, 2019

Are you smarter than a 5th grade amoeba?

Amoeba finds approximate solution to TSP in linear time: here.

Over the years we have seen models of computation that claim to solve NPC or other hard problems quickly. I ask non-rhetorically and with an open mind how they have panned out.

In no particular order:

1) Parallelism. For solving problems faster, YES. For speeding up solutions to NPC problems, I think YES. For making P = NP somehow, NO. Even so, parallel computers have been a definite practical success.

2) Quantum Computing. Will they factor large numbers anytime soon? Ever? Should we view the effort to build them as an awesome and insightful Physics experiment? Are there any problems that they are NOW doing faster? Is Quantum Crypto (I know, not the same thing) actually used? Will other things of interest come out of the study of quantum computing? It already has, see here.

3) DNA computing. Did that lead to practical solutions to NPC problems? I do not think it did. Did that lead to interesting math? Interesting biology? Interesting science?  I do not know.

4) Autistic computing for finding primes: see here. Oliver Sacks, the neurologist, claimed that two autistic twin brothers could generate large primes quickly. This story was never put to a rigorous test and may not be quite right.

5) Amoeba computing: Too early to tell. The article seems to say it succeeded on 8 cities.

The problem with all of these non-standard models of computing is SCALE.  And the more powerful classic computers get, the harder it is for these nonstandard models to compete.

Are these models interesting even if they don't end up getting us fast algorithms? They can be:

1) Do they lead to mathematics of interest? (Quantum- Yes, Parallelism- Yes)

2) Did they inspire algorithms for classical computers? (Quantum- Yes)

3) Do they give insight into other fields? (Quantum for Physics yes, DNA-computing for bio-??)

4) Have they ACTUALLY sped up computations in meaningful ways for problems we care about? (Parallelism has.)

If you know of any result which I missed

(e.g.,

Amoeba-computing giving insight into evolution,

Autistic computing being used by the NSA to find primes,

DNA computing leading to interesting mathematics)

then please leave a polite comment.

Thursday, June 20, 2019

The Urban/Rural Collegiality Divide

Just a reminder that Grigory Yaroslavtsev has taken over the Theory Jobs Spreadsheet. Check out who is moving where and let everyone know where you will continue your research career.

In 1999 when I considered leaving the University of Chicago for NEC Research I had a conversation with Bob Zimmer, then VP of Research and current U of C President. Zimmer said it was a shame I didn't live in Hyde Park, the Chicago South Side neighborhood where the university resides, and thus was not fully connected with the school. At the time I didn't fully understand his point and did leave for NEC, only to return in 2003 and leave again in 2008. I never did live in Hyde Park.

Recently I served on a review committee for a computer science department in a rural college town. You couldn't help but notice the great collegiality among the faculty. Someone told me their theory that you generally get more faculty collegiality in rural versus urban locations. Why?

In urban locations faculty tend to live further from campus, to get bigger homes and better schools, and face more traffic. They are likely to have more connections with people unconnected with the university. There are more consulting opportunities in large cities, a larger variety of coffee houses to hang out in and better connected airports make leaving town easier. Particularly in computer science, where you can do most of your research remotely, faculty will find themselves less likely to come in every day and you lose those constant informal connections with the rest of the faculty.

This is a recent phenomenon; even going back to when I was a young faculty member, you needed to come to the office for access to research papers, better computers to write papers, and good printers. Interactions with students and colleagues are always better in person, but in the past the methods of electronic communication proved more limiting.

The University of Chicago helped create and promote its own neighborhood and ran a very strong private school with reduced tuition for faculty children. Maybe my life would have been different had I immersed myself in that lifestyle.

Or maybe we should go the other extreme. If we can find great ways to do on-line meetings and teaching, why do we need the physical university at all?

Monday, June 17, 2019

Why does the Nevanlinna Prize (now the Abacus Medal) go to Algorithms/Complexity people?

In my post about the Nevanlinna prize  name change (see here) one of my readers raised a different question about the prize:

BEGIN QUOTE

So there's one of my main questions about the prize answered (or at least resolved). The second remains. The IMU's website(which still refers to the Nevanlinna Prize) says that it is awarded "for outstanding contributions in Mathematical Aspects of Information Sciences including:"

1)All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.

2)Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.

Correct me if I'm wrong, but it seems that the recognized work of the ten winners of the award all fits into two or three of the possible research areas for which the prize may be rewarded. Why do people think that this is the case?

END QUOTE

First off, let's see if this is true. Here is a list of all the winners:

Tarjan, Valiant, Razborov, Wigderson, Shor, Sudan, Kleinberg, Spielman, Khot, Daskalakis

Yup, they all seem to be in Algorithms or Complexity.

Speculation as to why:

1) Algorithms and Complexity have problems with short descriptions that can easily be understood: Tarjan proved Planarity was in O(n) time. Valiant defined Sharp-P and showed the Permanent was Sharp-P complete. Hence it is easy to see what they have done. In many of the fields listed, while people have done great work, it may be harder to tell since the questions are not as clean.  If my way to do Vision is better than your way to do Vision, that may be hard to prove. And the proof  may need to be non-rigorous.

2) If someone does great work in (for example) Logic of Programming Languages, it may not be recognized until she is past 40 years old.

3) This one I am less sure of (frankly I'm not that sure of any of these and invite polite commentary): areas that are more practical may take longer to build and get to work.

But there is still a problem with this. Numerical Analysis and Cryptography would seem to not have these problems.

I invite the reader to speculate.

Wednesday, June 12, 2019

Compressing in Moscow

This week finds me in Moscow for a pair of workshops, the Russian Workshop on Complexity and Model Theory and a workshop on Randomness, Information and Complexity. The latter celebrates the lives of Alexander Shen and Nikolay Vereshchagin on their 60th birthdays.

Alexander Shen might be best known in computational complexity for his alternate proof of IP = PSPACE. In 1989, Lund, Fortnow, Karloff and Nisan gave an interactive proof for the permanent, which got the entire polynomial-time hierarchy by Toda's theorem. But we didn't know how to push the protocol to PSPACE; we had a problem keeping the degrees of polynomials low. Shamir had the first proof, by looking at a specific protocol for PSPACE. Shen had the brilliant but simple idea to use a degree-reducing operator, taking the polynomial modulo x² - x. The three papers appeared back-to-back-to-back in JACM.
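
The degree-reducing trick is easy to illustrate: modulo x² - x we have x² ≡ x, so every power x^k with k ≥ 1 collapses to x, and the reduced polynomial agrees with the original on the Boolean inputs 0 and 1. Here is a one-variable sketch of the idea (the actual protocol applies it variable by variable):

```python
# Reduce a univariate polynomial modulo x^2 - x.  Since x^2 = x at
# x = 0 and x = 1, the reduction preserves the polynomial's values on
# {0, 1} while forcing its degree down to at most 1.
def reduce_mod_x2_minus_x(coeffs):
    """coeffs[k] is the coefficient of x^k.  Returns [c0, c1]."""
    c0 = coeffs[0] if coeffs else 0
    c1 = sum(coeffs[1:])          # every x^k (k >= 1) collapses to x
    return [c0, c1]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

p = [5, -2, 7, 3]                 # 5 - 2x + 7x^2 + 3x^3
q = reduce_mod_x2_minus_x(p)
print(q)                          # [5, 8]
print(evaluate(p, 0) == evaluate(q, 0), evaluate(p, 1) == evaluate(q, 1))  # True True
```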

Shen and Vereshchagin, though, made their careers with their extensive work on Kolmogorov complexity and entropy, often together. Vereshchagin and I have co-authored some papers together during our mutual trips to Amsterdam, including on Kolmogorov Complexity with Errors and how to increase Kolmogorov Complexity. My favorite work of Shen and Vereshchagin, which they did with Daniel Hammer and Andrei Romashchenko, showed that every linear inequality that holds for entropy also holds for Kolmogorov complexity and vice-versa, the best argument that the two notions of information, one based on distributions, the other based on strings, share strong connections.

Today is Russia Day, which celebrates the reestablishment of Russia out of the Soviet Union in 1990. Just like how the British celebrate their secession from the US in 1776, I guess. But I'm celebrating Russia Day by honoring these two great Russians. Congrats Sasha and Kolya!

Saturday, June 08, 2019

Ray Miller, one of our founders, Passes away at the age of 90

Ray Miller, one of the founders of our field, passed away recently at the age of 90.

He has associations with both GA Tech and The University of Maryland, so both Lance and I have a connection to him, as does Dick Lipton, who has posted about him here.

I present two guest blog posts about him.

Post One: From

Ming C Lin

Elizabeth Stevinson Chair of Computer Science

University of Maryland at College Park

Dear CS Alumni and Friends,

We are deeply saddened to learn that Professor Emeritus Ray Miller passed away two nights ago around 9 pm.

A Memorial Service at St. Andrews Lutheran Church (15300 New Hampshire Ave., Silver Spring MD  20905) for Dr. Miller will be held on Saturday, June 15th at 10:30 am.

Dr. Ray Miller received his Ph.D. from University of Illinois in 1957.  He was a professor and the former Director of the School of Information and Computer Science at the Georgia Institute of Technology before joining our department in 1988 as the Director of the Center of Excellence in Space Data and Information Science (CESDIS).   Dr. Miller was well known for his research on communication protocols, networks, distributed systems, parallel computation, and theory.

In 1970, he became a Fellow of the IEEE for the advancement of the theoretical understanding of computation through work in switching theory and theoretical models.

In 1997, he was elevated to be a Fellow of ACM for research contributions to the theory of parallel computation and for his distinguished service to the Computer Science community as an educator and leader.

In 2003, Dr. Miller was designated as a Fellow by the Computer Science Accreditation Board "in recognition of his outstanding professional volunteer contributions to computing sciences and accreditation".

Dr. Miller was also an AAAS Fellow and a Charter Member of the IEEE Computer Society Golden Core; he received the IEEE Third Millennium Medal in 2000 and the ACM Distinguished Service Award in 2002.

Beyond his outstanding research contribution and devotion to education, Dr. Ray Miller has been known for his kindness as a colleague, supportiveness as a mentor, and effectiveness as a leader. Dr. Miller will be forever remembered warmly by his friends, colleagues and students for his dedication and service to our department, the University, and the field of computing at large.

Post Two: From

Ben Shneiderman

Emeritus Professor, University of Maryland at College Park.

I was saddened to hear about the death of Ray Miller at age 90.  He was a dear colleague who contributed a great deal to computer science and to our department.  You can read his 84-page personal memoir at the IEEE Computer Society History Committee website: here.

His memoir tells his story in detail, describing his research collaborations in computational complexity, parallel algorithms, and program optimization, and his leadership roles. You can see more about Ray’s work on his ACM author page: here.

This is the best source as he had no Google Scholar page or Wikipedia article that I could find. Ray’s quiet and modest style was a valuable asset, but his contributions come through in his memoir. He describes working with Turing Awardees John Cocke, Fran Allen, John Backus, Dick Karp, and other key figures, so maybe Ray should have received that award too.  Ray was also an excellent administrator and leader, who contributed to building the institutions (conferences, ACM, IEEE, etc.) that supported the growth of computer science.

Ray was especially kind to me in the early 1970s, when I was working on my PhD, developing a graph-theoretic model of data structures. Ray was then Assistant Director of the IBM Mathematical Science Department at the T. J. Watson Labs in Yorktown Heights, NY. This legendary IBM research lab was the equal of Bell Labs and filled with computing pioneers in hardware, software, and applications.

Ray invited me to give a talk about my work, drawing interest from Arnold Rosenberg, who had been developing related ideas. With Ray’s support I returned for monthly visits with Arnie and Ray to refine my esoteric ideas, leading to my May 1973 PhD.

Ray’s kindness as a colleague and supportiveness as a mentor will always be remembered warmly. Here are a few photos of Ray giving a talk in the CS Department, probably in 1985: here, here, and

Thursday, June 06, 2019

What Happened to the Surprising Theorems?

Twenty-five years ago Peter Shor presented a polynomial-time factoring algorithm for quantum computers. For Peter, it was a simple translation of a quantum algorithm due to Dan Simon. For the rest of us, it was a shock: while we knew quantum could do some seemingly artificial problems exponentially faster, no one expected a natural problem like factoring to fall so quickly. I remember remarking at the time that Shor bought quantum computing twenty years; now I would say fifty.

That may have been the last time I was truly shocked by a theorem in theoretical computer science. I've been shocked by proofs: that Primes are in P, Undirected Connectivity in Log Space, NEXP not in ACC⁰, Graph Isomorphism in quasi-polynomial time. But the theorems themselves all went in the directions we expected.

In the ten years before Shor we had plenty of surprises, interactive proofs, zero-knowledge proofs, probabilistically checkable proofs, nondeterministic space closed under complementation, hardness versus randomness, the permanent hard for the polynomial-time hierarchy. It seemed to come to a hard stop after Shor.

There have been some mild surprises, the Hadamard isn't rigid, holographic algorithms, the complexity of Nash equilibrium, QIP = PSPACE, and many others. But nothing that has made us  rethink the complexity world.

This reflects the maturity of our field. How many shocking theorems have we seen recently in math in general? We're shocked by the proofs of the Poincaré conjecture and Fermat's last theorem, but both went in the expected direction.

We will have some shocking theorem in the future, maybe Factoring in P or L = NL. To be truly shocked it would have to be something I can't even imagine being true today.

Tuesday, June 04, 2019

The IMU's non-controversial renaming of the Nevanlinna Prize

(I want to thank Alexander Soifer for supplying me with some of the documents I point to in this post. We should all thank him for getting the ball rolling on changing the name of the Nevanlinna Prize.)

The Nevanlinna Prize was essentially a Fields Medal for Theoretical Computer Science.  I do not know why it is a Prize instead of a Medal.

It has been renamed The Abacus Medal. If you want to know why the IMU (International Mathematics Union) thinks the new name is good but do not care even a little about why the original name was bad then see this article: here.

So why is The Nevanlinna Prize a bad name? In brief, Rolf Nevanlinna was an enthusiastic Nazi sympathizer. How enthused? He served as the chair of the Finnish SS recruitment committee.

That would seem like enough to get the name changed. In fact, it makes one wonder why the prize originally had the name.

1) Why the change now?  It began when Alexander Soifer came across this information about Nevanlinna while working on his book

The Scholar and the State: In Search of Van der Waerden (see here to buy it, see here for a book review column that includes my review of it).

He then wrote a letter to the IMU, which sponsors the Nevanlinna Prize. The letter is here. Note that Alexander offered to pay for the prize ($15,000 every four years) if that would help get the name changed.

After a response that lamely said (I paraphrase): Gee, we didn't know. Oh well. Alex then wrote another letter, which is here.

The story has a happy ending: the name was changed.  (No, Alexander is not paying for the award.)

2) For a full summary of why the award was originally named after Nevanlinna and why it was changed, see the article Yes We Can by Alexander Soifer in the journal Mathematical Competitions, here.

3) When is change possible?

Assume Y did X and X is awful (e.g., I assume that for most of my readers, believing and spreading Nazi propaganda qualifies). Assume there is a Y-prize. What does it take to have the name changed?

a) You need someone pushing hard for it. Kudos to Alexander Soifer who started this.

b) There is no really good reason to use that name in the first place.

What was Nevanlinna's contribution to mathematical aspects of computer science? The IMU (International Mathematics Union) internet page answers:

The prize was named in honors of Rolf Nevanlinna ... who in the 1950's had taken the initiative to the computer organization at Finnish Universities.

That's all. If there was a Gauss Prize (actually there IS a Gauss Prize) and we later found out that Gauss was X, I doubt we would change the name of the award. Gauss's name is on it since he is a great mathematician.

c) The person on the award is not the one giving the money. If we found out that Nobel was an X, I doubt the name would change since he paid for it.

d) If the award name is well known then it might not change. Nobel is a good example. I think the Nevanlinna Prize is mostly unknown to the public. The Fields Medal is better known, though still not that well known. The general public became briefly aware of the Fields Medal twice: when it was mentioned in the movie Good Will Hunting, and when Perelman turned it down. Fame is fleeting for both prizes and people.

e) Organizations don't like to change things. Hence X would need to be particularly bad to warrant a name change.

OTHER THOUGHTS

1) Why The Abacus Medal? Perhaps they are worried that if they name it after someone, and that someone turns out to be an X, they'll have to change it again. I find the explanation given here to be unsatisfying. I find the fact that they make NO MENTION of why they are no longer naming it The Nevanlinna Prize appalling and insulting.

2) Let's turn to people who get the awards. If someone solved two Millennium problems and clearly deserved a Fields Medal, but was an X, should they be denied the prize on that basis? I would tend to think no (that is, they should get the prize), but it does trouble me. What would happen? I honestly don't know.

3) X will change over time.