Sunday, September 20, 2020

Baseball can go on forever, it doesn't just seem that way

Most games have some way to make sure they cannot go on forever.

1) Chess: I had thought there was a 50-move rule and a 3-times-same-position rule, but it's a bit more complicated than that, see here. There is also a chess clock. Suffice it to say, Chess can never go on forever (though it may seem like it does). 

2) NIM: Eventually all of the stones are gone. There may be more complicated versions where you can add some stones, but in those versions I suspect that there is some parameter that goes to 0.

3) Basketball, Football, Hockey, Soccer: These all have a clock, so they are time limited. For overtime there are also rules that make sure the game cannot go on forever. Or maybe it's just very rare: what if the Superb Owl (spelled that way to avoid lawsuits, see here) is tied 0-0 at the end of the four quarters and goes into overtime and... nobody scores... ever. Could the game go on forever or would the referees declare it a tie? In the regular season there are ties, but in the Superb Owl? Actually this may be more of a problem in the playoffs, since you need to determine who goes on to the next round.

4) Take your favorite game. I would bet dollars to doughnuts (what an odd phrase---see here for more about it) that there is some mechanism to make sure the game ends. An exception that Darling pointed out to me: if in Gin Rummy both players are terrible then the game can go on forever. This is probably true for other games as well, and it actually splits the question into two questions: (a) will a game terminate no matter what the players do, and (b) (not sure how to formalize this) will a game terminate if both players are trying to win and are making reasonable moves?

You may have noticed that in item 3 I left out Baseball. There is no clock in baseball. So one way the game can go on forever is to have a tie and extra innings where nobody scores. I think the umpire has the authority to call it a tie. (Incidentally, the shortened baseball season has a new extra-inning rule---each extra inning starts with a runner on second. See here.) When Lance read an earlier version of this post he pointed me to 5 ways a game can go on forever, not counting the example I have later in this post. Here is where Lance found the question and answer (look on the first page under Speed Department for the question, and at the very end of the second page for the answer). I also did my own write-up with more details, see here. Also of interest (though not if you were actually at the game where this happened): the record for the number of times a player has fouled off a pitch with 2 strikes is 16, see here.

However, I came across an example more obscure than any of those. 

Here is what happened (and you can see the video of it here, though it really starts about a minute in). Keep reading---it looks like it's another post, but it's part of this post: 

From your Digest

Back in 2008, the Yankees drafted a pitcher named Pat Venditte. What made Venditte unusual is that he can throw with both hands. In other words, he’s a switch pitcher. When he was drafted, he was assigned to the Staten Island Yankees, a low A ball team.

In his first game (against the Mets farm team, the Brooklyn Cyclones), Venditte came in to pitch. After getting the first two batters out and giving up a single, he then faced Ralph Henriquez, who was a switch hitter. What happened next resembled an Abbott and Costello comedy routine. Venditte would put the glove on one hand (he had a specially made glove that could be worn on either hand) and Henriquez would then step across the plate to bat from the other side. Venditte would then switch his glove hand again and Henriquez would go back to the other side.

Eventually, after much discussion, the umpires ruled that Henriquez would have to choose a batting side first, before Venditte had to commit. Henriquez was mad and, after he struck out, he slammed the bat against the ground in frustration.

The umpires were, in essence, winging it, because there was no rule to cover the situation. Eventually, the higher ups in baseball did write a rule to cover the situation — the opposite of the umpires’ decision.

Monday, September 14, 2020

An interesting serendipitous number

Last week I blogged about two math problems of interest to me here.

For one of them, two people posted answers, which was great since I didn't know how to solve it and now I do. Yeah! I blogged about that here.

The other problem got no comments, so I suppose it was of interest to me but not others. I was interested in it because the story behind it is interesting, and the answer is interesting.

It is from the paper 

An interesting and serendipitous number by John Ewing and Ciprian Foias, which is a chapter in the wonderful book 

Finite vs Infinite: Contributions to an eternal dilemma

Here is the story; I paraphrase the article (I'll give pointers later).

In the mid 1970's a student asked Ciprian about the following math-competition problem:

x(1) > 0, x(n+1) = (1 + (1/x(n)))^n. For which x(1) does x(n) --> infinity?

It turned out this was a misprint. The actual problem was

x(1) > 0, x(n+1) = (1 + (1/x(n)))^{x(n)}. For which x(1) does x(n) --> infinity?

The actual math-competition problem (with exponent x(n)) is fairly easy (I leave it to you). But this left the misprinted problem (with exponent n). Ciprian proved that there is exactly ONE x(1) such that x(n) --> infinity. 

It's approximately 1.187... and may be transcendental.
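If you want to play with the misprinted recurrence numerically, here is a minimal Python sketch (my own, not from the paper). Floating-point error eventually swamps the iteration, so it only illustrates the first several terms; the starting value 1.187452 is just a truncation of the constant.

```python
# Iterate the misprinted recurrence x(n+1) = (1 + 1/x(n))^n.
def iterate(x1, steps):
    """Return [x(1), x(2), ..., x(steps)]."""
    xs = [x1]
    for n in range(1, steps):
        xs.append((1.0 + 1.0 / xs[-1]) ** n)
    return xs

# For a generic start such as x(1) = 1 the sequence does not tend to infinity
# (by the theorem, only one starting value diverges).
print(iterate(1.0, 10))

# Starting at a truncation of the Foias constant; only suggestive, since
# rounding error quickly knocks the orbit off the exceptional trajectory.
print(iterate(1.187452, 10))
```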

I find the story and the result interesting, but the proof is too long for a blog post.

I tried to find the article online and could not. A colleague found the following:

A preview of the start of the article here

Wikipedia page on that number, called the Foias constant, here

Mathworld page on that number here

Most of the article (though it skips two pages) here

Thursday, September 10, 2020

When are x^2+3y and y^2+3x both squares, and a more general question

 In my last post (see here) I asked two math questions. In this post I discuss one of them. (I will discuss the other one later, probably Monday Sept 14.)

For which positive naturals x,y are x^2+3y and y^2+3x both squares?

I found this in a math contest book and could not solve it, so I posted it to see what my readers would come up with. They came up with two solutions, which you can either read in the comments on that post OR read in my write-up here.

The problem raises two more general questions

1) I had grad student Daniel Smolyak write a program that showed that if 1\le x,y\le 1000 then the only solutions are (1,1), (11,16), and (16,11). (See the write-up for why the program did not have to look at anything close to all possible (x,y).)

Is there some way to prove that if the only solutions for 1\le x,y\le N (for some N) are the three given above, then there are no other solutions?

2) Is the following problem solvable: given p,q in Z[x,y], determine if the number of (a,b) such that both p(a,b) and q(a,b) are squares is finite or infinite. AND if finite, determine how many, or a bound on how many.

We could replace squares with other sets, but let's keep it simple for now. 
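For concreteness, here is a minimal brute-force search for the original problem. This is my own sketch, not Daniel Smolyak's program, and it has none of his pruning, so keep the bound modest.

```python
from math import isqrt

def is_square(m):
    """True iff m is a perfect square."""
    r = isqrt(m)
    return r * r == m

def solutions(bound):
    """All (x, y) with 1 <= x, y <= bound such that
    x^2 + 3y and y^2 + 3x are both perfect squares."""
    return [(x, y)
            for x in range(1, bound + 1)
            for y in range(1, bound + 1)
            if is_square(x * x + 3 * y) and is_square(y * y + 3 * x)]

print(solutions(100))  # per the post: [(1, 1), (11, 16), (16, 11)]
```

Checking (11,16) by hand: 121 + 48 = 169 = 13^2 and 256 + 33 = 289 = 17^2.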

Sunday, September 06, 2020

Two Math Problems of interest (at least to me)

 I will give two math problems that are of interest to me.

These are not new problems; however, you will have more fun if you work on them yourself and leave comments on what you find. So if you want to work on them without hints, don't read the comments.

I will post about the answers (not sure I will post THE answers) on Thursday.

1) Let x(1)>0. Let x(n+1) = (1 + (1/x(n)))^n. 

For how many values of x(1) does this sequence go to infinity?

2) Find all (x,y) \in N \times N such that x^2+3y and y^2+3x are both squares. 

Tuesday, September 01, 2020

A well known theorem that has not been written down- so I wrote it down- CLIQ is #P-complete

(The two proofs that CLIQ is #P-complete that I discuss in this blog are written up by Lance and myself here. I think both are well known but I have not been able to find a writeup, so Lance and I did one.)

I have been looking at #P (see my last blog post on it, here, for a refresher on this topic) since I will be teaching this topic in Spring 2021. Recall

#SAT(\phi) = the number of satisfying assignments of \phi

#CLIQ((G,k)) = the number of cliques of size \ge k in G

#SAT is #P-complete by a cursory look at the proof of the Cook-Levin theorem.

A function is #P-complete if everything else in #P is poly-Turing reducible to it. To show that, for some set A in NP, #A is #P-complete, one usually uses parsimonious reductions.

I wanted a proof that #CLIQ is #P-complete, so I wanted a parsimonious reduction from SAT to CLIQ. (That's a reduction f: SAT --> CLIQ such that the number of satisfying assignments of \phi equals the number of cliques of size \ge k in G, where f(\phi) = (G,k).)

I was sure there is such a reduction and that it was well known and would be on the web someplace. So I tried to find it.


I tracked some references to a paper by Janos Simon (see here) where he claims that the reduction of SAT to CLIQ by Karp (see here) was parsimonious. I had already considered that and decided that Karp's reduction was NOT parsimonious. I looked at both Simon's paper and Karp's paper to make sure I wasn't missing something (e.g., I misunderstood what Simon Says or what Karp... darn, I can't think of anything as good as `Simon Says'). It seemed to me that Simon was incorrect. If I am wrong, let me know.
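To see concretely that Karp's reduction is not parsimonious, one can count both quantities on a tiny formula. Here is a sketch in Python, using my own encoding of Karp's construction (one vertex per (clause, literal) pair, with an edge between non-contradictory literals in different clauses):

```python
from itertools import combinations, product

# Literals are nonzero ints: +v for x_v, -v for NOT x_v.
PHI = [[1, 2], [1, -2]]  # (x1 OR x2) AND (x1 OR NOT x2)
NVARS = 2

def count_sat(clauses, nvars):
    """Brute-force count of satisfying assignments."""
    return sum(1 for bits in product([False, True], repeat=nvars)
               if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                      for c in clauses))

def karp_graph(clauses):
    """Karp's construction: vertices (clause index, literal);
    edge iff different clauses and the literals do not contradict."""
    verts = [(i, l) for i, c in enumerate(clauses) for l in c]
    edges = {frozenset([u, v]) for u, v in combinations(verts, 2)
             if u[0] != v[0] and u[1] != -v[1]}
    return verts, edges

def count_cliques_of_size(verts, edges, k):
    """Brute-force count of k-cliques."""
    return sum(1 for S in combinations(verts, k)
               if all(frozenset([u, v]) in edges
                      for u, v in combinations(S, 2)))

verts, edges = karp_graph(PHI)
print(count_sat(PHI, NVARS))                          # 2
print(count_cliques_of_size(verts, edges, len(PHI)))  # 3
```

The formula has 2 satisfying assignments but the graph has 3 cliques of size 2, because a satisfying assignment can make more than one literal in a clause true. So the counts disagree.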


Someone told me that it was in Valiant's paper (see here). Nope. Valiant's paper shows that counting the number of maximal cliques is #P-complete. Same deal here---if I am wrong, let me know. One can modify Valiant's argument to get that #CLIQ is #P-complete, and I do so in the write-up. The proof is a string of reductions and starts with PERM being #P-complete. This does prove that #CLIQ is #P-complete, but it is rather complicated. Also, the reduction is not parsimonious.

Lance told me an easy parsimonious reduction that is similar to Karp's non-parsimonious reduction, but it really is NOT Karp's reduction. It's in my write-up. I think it is well known since I recall hearing it about 10 years ago. From Lance. Hmmm, maybe it's just well known to Lance.
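I won't reproduce the write-up here, but the following sketch shows one standard way to make such a reduction parsimonious (this is my own illustration, not necessarily the reduction in the write-up): take as vertices the satisfying partial assignments to each clause's variables, rather than single literals. Assuming every variable occurs in some clause, satisfying assignments then correspond bijectively to cliques of size m, the number of clauses:

```python
from itertools import combinations, product

PHI = [[1, 2], [1, -2]]  # (x1 OR x2) AND (x1 OR NOT x2); 2 satisfying assignments

def consistent(p, q):
    """Two partial assignments agree on every variable they share."""
    return all(p[v] == q[v] for v in p.keys() & q.keys())

def pars_graph(clauses):
    """Vertex = (clause index, satisfying partial assignment to that
    clause's variables); edge iff different clauses and consistent."""
    verts = []
    for i, c in enumerate(clauses):
        vs = sorted({abs(l) for l in c})
        for bits in product([False, True], repeat=len(vs)):
            pa = dict(zip(vs, bits))
            if any(pa[abs(l)] == (l > 0) for l in c):  # satisfies clause i
                verts.append((i, tuple(sorted(pa.items()))))
    edges = {frozenset([u, v]) for u, v in combinations(verts, 2)
             if u[0] != v[0] and consistent(dict(u[1]), dict(v[1]))}
    return verts, edges

def count_m_cliques(verts, edges, m):
    return sum(1 for S in combinations(verts, m)
               if all(frozenset([u, v]) in edges
                      for u, v in combinations(S, 2)))

verts, edges = pars_graph(PHI)
print(count_m_cliques(verts, edges, len(PHI)))  # 2 = number of satisfying assignments
```

Since two vertices from the same clause assign the same variables differently, there are no intra-clause edges, so no clique has size greater than m; hence counting cliques of size \ge m is the same as counting cliques of size exactly m.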

But here is my question: I am surprised I didn't find it on the web. If someone can point to a place on the web where it is, please let me know. In any case, it's on the web NOW (my write-up), so hopefully in the future someone else looking for it can find it.

Sunday, August 23, 2020

Sharp P and the issue of `natural problems'

#P was defined by Valiant as a way to pin down the fact that the PERMANENT of a matrix is hard to compute.

The definition I give is equivalent to the one Valiant gave.

g is in #P if there exists a polynomial p and a set B in P such that

g(x) = | { y : |y| = p(|x|) and (x,y) \in B } |

A function f is #P-complete if f is in #P and for all g in #P, g is poly-Turing reducible to f.

#SAT is the function that, given a formula, returns the number of satisfying assignments. It is #P-complete by looking at the proof of the Cook-Levin Theorem; the reduction to #SAT only makes one query. A common way to show that #A is #P-complete is to show that SAT \le A with a reduction that preserves the number of solutions. 

Valiant proved that PERM was #P-complete (his reduction only used 1 call to PERM).

There are problems in P whose #-version is #P-complete: Matching and DNF-SAT are two of them.
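To illustrate the DNF case: deciding whether a DNF formula is satisfiable is trivial (some term must be free of contradictory literals), while counting its satisfying assignments is #P-complete. A brute-force sketch of both, with my own toy encoding (literals as signed integers):

```python
from itertools import product

# A DNF is a list of terms; each term is a conjunction of literals.
# Literals are nonzero ints: +v for x_v, -v for NOT x_v.
DNF = [[1, -2], [2]]  # (x1 AND NOT x2) OR x2, over x1, x2
NVARS = 2

def dnf_satisfiable(terms):
    """Easy decision: satisfiable iff some term has no contradictory pair."""
    return any(all(-l not in t for l in t) for t in terms)

def count_dnf(terms, nvars):
    """Brute-force #DNF-SAT: exponential time, and no polynomial-time
    algorithm is expected since the counting problem is #P-complete."""
    return sum(1 for bits in product([False, True], repeat=nvars)
               if any(all(bits[abs(l) - 1] == (l > 0) for l in t)
                      for t in terms))

print(dnf_satisfiable(DNF))   # True
print(count_dnf(DNF, NVARS))  # 3
```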

Notice that I defined #SAT directly, not in terms of a poly p and a set B as above. Here is why: if you use poly p and set B one can do obnoxious things like: 

SAT = { phi : there exists yz, with y and z each n bits long, such that phi(y)=T and z is prime }

The # version of this definition is not really what I want (though I am sure it's #P-complete).

Valiant (see here and here) and Simon (see here) showed that for many known NPC problems A, #A is #P-complete. They meant NATURAL problems. Is it true for all natural NP-complete problems?

Unfortunately the statement `All NATURAL NPC problems give rise to #P-complete functions' is hard (impossible?) to state rigorously and hence hard (impossible?) to prove. 

1) Is there a natural A in NP such that #A is NOT #P-complete (under assumptions)?

2) Are there any theorems that show that a large set of NPC problems have #P-complete counterparts? Or are we doomed, when we want to show some #A is #P-complete, to come up with a new proof each time?

3) Can one PROVE there are NPC problems A such that #A is NOT #P-complete? (under assumptions).

Monday, August 17, 2020

Mathematics is not commutative

In my poll of P vs NP and other issues, one of the questions was

                             Is factoring in P?

One of the most interesting answers was

                             I don't really see why it shouldn't be. - Peter Shor

Recall that Peter Shor proved that Factoring is in Quantum-P, which led to intense interest in Quantum Computing.

1) What if factoring were in P and this had been shown before Shor's algorithm? Would Shor or someone else ever have proven factoring in quantum-P? Would there be as much intense interest in quantum computing as there is now? Perhaps by physicists more than CS people?

2) What if factoring were in P and this had been shown before RSA? Where would crypto be now? Zip drives with a googolplex of random (or nearly random) bits and more 1-time pads? More lattice-based crypto? Or RSA but with larger numbers? This may depend on how good the factoring algorithm is.

3) More generally, how much does the order of events matter for science?

a) If the Banach-Tarski paradox had been discovered early on, would we have just tossed out the Axiom of Choice before so much more was built on it? Darling thinks we should toss out AC NOW because of Banach-Tarski.

b) In the model of set theory L you can do ALL of math except some parts of set theory and maybe a few other things (not quite: Harvey Friedman has found some combinatorial statements that need large cardinals to prove). Had L been discovered earlier, could we all now be working in L (except a few people who look at other models, but they would not be in the mainstream)? We might know more about L and less about forcing. We would KNOW that AC and CH are true. Or we would think we know.

c) If Engineers had been the first ones to look at SAT and reductions, might they have been content to know that if SAT \le A then A is probably hard? No need for the Cook-Levin Theorem! And then when someone proved Cook-Levin, would the Engineers not really care since they already knew SAT was hard?

d) I can imagine Ramsey's Theorem being discovered much later for some application, or perhaps never being discovered at all.

e) VDW's theorem has so few applications, I can imagine it never being discovered. 

4) There are cases where if A was discovered before B then B has an easy proof, whereas if B was discovered before A, then B has a hard proof. I'll give one example:

Given that HALT is undecidable, Godel's theorem is easy.

Assume HALT is undecidable. 

Let STAT(e) be the statement: M_e(0) does not halt.

There is some e such that M_e(0) does not halt  but ZFC cannot prove this.

PROOF: Assume, by way of contradiction, that for all e such that M_e(0) does not halt, ZFC can prove this. Then HALT is DECIDABLE:

Given e, run M_e(0) and at the same time enumerate all proofs in ZFC. It is guaranteed that you will either find that M_e(0) halts or find a proof that M_e(0) does not halt. Hence you will, in finite time, know whether M_e(0) halts OR NOT.


Is the sequence of events where HALT is proven undecidable before Godel's theorem plausible? I think so.

I INVITE my readers to give their own examples of when Math is not commutative---meaning that the order of events matters.

Thursday, August 13, 2020

Simons Institute Gets Another Decade

 Great news out of the Simons Institute.

The Simons Foundation has ensured a second decade of research and innovation for the Simons Institute for the Theory of Computing, based at UC Berkeley, through a $35.5 million grant. The grant, which will begin in 2022, after the conclusion of the Simons Institute's first 10 years, will support the Simons Institute's mission and activities through June 2032.

Congrats to Shafi Goldwasser and her team and of course a special thanks to Jim Simons and his foundation for their support for the theory community. 

Time flies. I remember when I was on Team Chicago in the final rounds back in 2011. We lost, but the theory community won, as Berkeley has done well by the institute with amazing collaborative spaces, thanks mainly, I hear, to Alistair Sinclair, and the strong programs and workshops organized by the many volunteers across the theory community. 

The institute started right when I started as a department chair so I never had the opportunity for the true Simons experience, joining for a semester-long program. When I did sneak away for a week at Simons I purposely avoided the workshops for Simons is at its best when strong researchers, connected by one of the programs, just talk, work and socialize together. I joined an amazing collection of complexity theorists to form a rather mediocre pub trivia team.

Even if you never make it there, the institute has a great collection of videos of its workshops, talks and celebrations. COVID-19 has driven Simons on-line but that just opens up their workshops and other events to a wider audience.

Congrats again to the Simons Institute for what it's given to the theory community and to its next dozen years with hopefully many more to follow!