Wednesday, December 27, 2017

Complexity Year in Review 2017

Theorem of the year goes to the resolution of the dichotomy conjecture. I wrote about the conjecture in February and while the Feder et al. paper didn't hold up, two later papers seem to resolve the conjecture.

A Proof of CSP Dichotomy Conjecture by Dmitriy Zhuk

A Dichotomy Theorem for Nonuniform CSPs by Andrei Bulatov

I checked with experts in the field and at least one, and more likely both, of these papers ought to be correct.

Runners up include two matching papers I posted about last month: Svensson and Tarnawski, who give a quasi-NC algorithm for general graph matching, and Anari and Vazirani, who give an NC algorithm for matching on planar graphs. We also had the nice quasi-polynomial time algorithm for parity games by Calude, Jain, Khoussainov, Li and Stephan that I posted on last March.

In last year's review we said "2016 will go down as a watershed year for machine learning," yet somehow it paled against 2017, with breakthroughs in chess, poker and astronomy, not to mention continuous advances in machine translation, autonomous vehicles and everything else. Maybe next year ML can write the 2018 year in review.

We had an awesome eclipse to remind us of the wonders of the world, and it almost made me forget about US politics. Computing keeps growing: how do we find the resources to train people from pre-school through college and throughout their lives? How much should we worry about the dominance of a handful of computing companies?



2018 is just full of questions: What will the Internet look like post-net neutrality? How will the new tax code play out? Where will Amazon put HQ2? What will machine learning do next? What can quantum computers with 50 qubits accomplish? Will bitcoin move to $300K or 30 cents? And what great advances in complexity await us?

Thursday, December 21, 2017

Having Faith in Complexity

I believe P ≠ NP as much as anyone else. Nevertheless, should we worry about the trust we put in complexity?

You don't need the full power of P = NP to break cryptography. I don't worry about quantum computers breaking RSA and related protocols. It won't sneak up on us--when (or if) quantum computing gets anywhere close to factoring large numbers, we'll figure out a strategy to change our protocols and to protect the information we already have. However, if someone comes up with an algorithm tomorrow that cracks AES, we'll have a crisis on our hands, as AES is so widely used that the algorithm is embedded into computer chips. Perhaps we can mitigate the damage before the algorithm spreads, or at least take our information off-line until we develop new solutions.

But what about blockchains, the technology that underlies cryptocurrencies such as Bitcoin? A blockchain consists of a series of transactions collected into a sequence of blocks, where each block contains a hash of the previous block, with the transactions themselves hashed and often encrypted with public-key cryptography. One would hope that breaking the cryptography would be caught quickly and we'd still have a legitimate record of transactions saved somewhere. The transactions themselves might be compromised, especially if anonymity was built into the system.
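To make the chaining concrete, here is a minimal Python sketch of the idea (my own toy format, not Bitcoin's actual block structure): each block records a hash of its predecessor, so tampering with an old block is detectable downstream.

    import hashlib, json

    def block_hash(block):
        # Hash a canonical encoding of the block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def make_block(transactions, prev_hash):
        return {"transactions": transactions, "prev_hash": prev_hash}

    genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
    block1 = make_block(["alice -> bob: 10"], prev_hash=block_hash(genesis))
    block2 = make_block(["bob -> carol: 3"], prev_hash=block_hash(block1))

    # Tampering with an old transaction changes its block's hash, so the
    # pointer stored in the next block no longer matches.
    genesis["transactions"][0] = "coinbase -> mallory: 50"
    print(block1["prev_hash"] == block_hash(genesis))   # False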

Bitcoin itself, as I write this, has a total market cap of over $250 billion based fully on cryptography. The cryptography will probably hold up; Bitcoin investors have more to worry about from bad implementations or the pop of a bubble of unrealistic expectations. But as we watch so many exciting advances in computing tackle challenges that we never expected to get solved, should we continue to build our economy on the hope that other advances won't happen?

Sunday, December 17, 2017

Monkey First!

The following story is not true, nor has anyone claimed it's true, but it has a point:

A company gets a contract to do the following: train a monkey to sit on a 10-foot pedestal and recite some passages of Shakespeare. After a week they announce they have made progress! They invite their investors to see what progress they have made! They pull back a curtain and there is... a 10-foot pedestal.

This story was in an article about how Google does moonshots-- that is, high-risk, high-reward, innovative work. The article is here. (How the Atlantic makes money when they put their stuff online is a mystery to me. Perhaps they do it in a very innovative way.) The point is that it's BAD to have tangible results (like the pedestal) that are not getting at the heart of the problem. So Google has various incentives to do the important stuff. Their slogan is MONKEY FIRST.

This also applies to our research.  The following sequence of events is common:

1) Prove some scattered results.

2) Pedestal or Monkey? You could write up what you have, polish it, write some nice LaTeX macros to make the writing of the paper easier, OR you could try to find the unifying principle, which would be hard and might not work, but if it works it would be, as the kids say, Jawesome (jaw-dropping awesome). The sad answer is that which one you do might depend on when the next conference deadline is.

More generally there is a tension between safe, doable research (Pedestal) and high-risk, high-reward research (Monkey). Is our incentive structure set up to encourage high-risk, high-reward work? The tenure system is supposed to do it, and it DOES in some cases, but not as much as it could since there are other factors (salary, promotion to full prof, grants).

Does the system encourage high-risk high-reward? Should it? Could we do better? What are your experiences? I have no answers (especially to the question of what are your experiences) so I welcome your comments.

Wednesday, December 13, 2017

Our AI future: The Good and the Ugly

I don't directly work in machine learning but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have basically solved face and voice recognition and would translate nearly as well as humans?

The Neural Information Processing Systems conference, held last week in Long Beach, California, sold out its 7500 registration slots in 12 days. NIPS, not long ago just another academic conference, has become a major machine learning job market where newly minted Ph.D.s earn north of $300,000 and top-ranked senior academics command multimillion-dollar, multiyear contracts.

AlphaZero, an offshoot of Google’s Go programs, learned chess given only the rules in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. Check out this match against Stockfish.



It's part of a trend: machine learning often works better when humans just get out of the way.

The advances in machine learning and automation have a dark side. Earlier this week I attended the CRA Summit on Technology and Jobs, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness but this may be the biggest challenge of them all.

The most stark statistic: Contrary to what certain politicians may tell you, manufacturing output in the United States has never been higher, but manufacturing jobs have declined dramatically due to automation.


The changes have hit less-educated white middle-class males the hardest. While this group usually doesn't get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents, spending their days playing video games, and we see a spike in suicides and drug use. Drug overdose is now the leading cause of death for men under 50.

There are no easy solutions. Universal basic income won't fill the psychological role a job plays in being part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning and find rewarding jobs that work around automation. This all starts by having a government that recognizes these real challenges.

Tuesday, December 12, 2017

Interesting Probability on a VERY OLD TV show

I have posted about things I see in TV or Movies that are math or CS related:

Do TV shows overestimate how much a genius can help solve crimes or make really good crystal meth (which seems to be blue)? YES, see here

Do TV shows get math wrong? YES, see here and about 90% of the episodes of Numb3rs

Closer to home- do TV shows say stupid things about P vs NP? Elementary (one of the two modern-day Sherlock Holmes shows) does; see here

Did Kirk and Spock really defeat a computer by a trick that wouldn't work now? Yes, see Lance's post on this here

Do TV shows use the word Quantum incorrectly? They do, but they are not alone in this; see here

Do the people writing Futurama get their math right? Yes- see here

Do the people writing 24 get their math wrong? Yes- see here

Does the Big Bang Theory mostly get things right? Yes! - see here

There are more (Seinfeld thinks comedians should learn proofs! Really- see here) but I can make my point just with the ones above.

ALL of the TV shows except Star Trek were from after 2000 (or so). So, with the exception of science fiction, math-refs and sci-refs in TV shows are relatively recent- or so I had thought.

Which is why I was surprised and delighted to see an episode of the old western (anti-western? satire of a western?) Maverick, from 1958 (before I was born!), called Rope of Cards, with a CORRECT and INTERESTING math reference. Maverick bets that a random 25 cards from a deck can be arranged into five 5-card pat hands (I had to look that up-- hands where you don't want to discard any cards, so a flush, a straight, or a full house would qualify; 4 of a kind would be pat if there were no wild cards). The sucker takes the bet and loses. Maverick later says the odds are high and calls the game Maverick Solitaire. And that is now the name of the puzzle- see here. The probability is around 0.98.
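If you want to convince yourself of that 0.98 figure, here is a rough Monte Carlo sketch in Python (my own code, not from the episode or the linked puzzle page; it assumes a pat hand is a straight, flush, full house or four of a kind, with no wild cards):

    import random
    from itertools import combinations

    RANKS = range(2, 15)   # 11 = J, 12 = Q, 13 = K, 14 = A
    SUITS = range(4)
    DECK = [(r, s) for r in RANKS for s in SUITS]

    def is_pat(hand):
        ranks = sorted(r for r, _ in hand)
        suits = {s for _, s in hand}
        if len(suits) == 1:                      # flush (includes straight flushes)
            return True
        counts = sorted(ranks.count(r) for r in set(ranks))
        if counts in ([2, 3], [1, 4]):           # full house or four of a kind
            return True
        distinct = sorted(set(ranks))
        return len(distinct) == 5 and (distinct[4] - distinct[0] == 4
                                       or distinct == [2, 3, 4, 5, 14])  # straight (ace low or high)

    def can_partition(cards):
        # Split the remaining cards into pat 5-card hands; the first card must
        # go into some hand, so only try hands containing it.
        if not cards:
            return True
        anchor, rest = cards[0], cards[1:]
        for combo in combinations(rest, 4):
            if is_pat((anchor,) + combo):
                if can_partition([c for c in rest if c not in combo]):
                    return True
        return False

    random.seed(0)
    trials, wins = 200, 0
    for _ in range(trials):
        deal = sorted(random.sample(DECK, 25))
        wins += can_partition(deal)
    print(wins / trials)   # should come out around 0.98 (may take a minute or so)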

I call this a mention of math since it has to do with probability- which may be a stretch. And I doubt the scene would encourage people to go into math. But it might encourage someone to learn probability, either to sucker others or to avoid being suckered.

So the question now is- are there other non-science-fiction refs to math in older TV shows?
I suspect yes- similar to the one above, which involves gambling and probability. What is the earliest mention of math on a TV show? The oldest that did not involve science fiction or gambling?


Thursday, December 07, 2017

Razor's Edge

Informally the sensitivity conjecture asks whether every hard Boolean function has a razor's edge input, where flipping a random bit has a reasonable chance of flipping the output.

Let's be more precise. We consider functions f mapping {0,1}^n to {0,1}. For every input x, the decision tree complexity at x is the least number of bits of x you would need to query to decide whether the function outputs 0 or 1. The decision tree complexity of a function is the maximum decision tree complexity over all possible x. Most interesting functions have high decision tree complexity; even the lowly OR function requires querying every bit on the input of all zeroes. The decision tree complexity is polynomially equivalent to randomized complexity, quantum complexity, certificate complexity, and the degree of a polynomial that computes the function exactly or approximately. The recent paper by Aaronson, Ben-David and Kothari gives a nice chart showing the relationship between these measures and references to the various papers giving the bounds.

The sensitivity of f on an input x is the number of bit locations i such that f(x) ≠ f(x⊕i), where x⊕i is x with the ith bit flipped. The sensitivity of f is the maximum sensitivity over all inputs. The sensitivity conjecture states that there is some ε>0 such that the sensitivity of f is at least m^ε if the decision tree complexity is at least m. If the conjecture were true then for any function with maximal decision tree complexity n (querying every input bit) there must be some razor's edge input x such that flipping a random bit of x has probability at least n^{ε-1} of flipping the output.

I find it surprising that we have no proof or counterexample to this purely combinatorial question. There is a generalization of sensitivity known as block sensitivity, which is the largest number of disjoint blocks of bit locations such that flipping the bits in any one block flips the output bit. Block sensitivity is known to be polynomially related to decision tree complexity.
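To make the definition concrete, here is a small Python sketch (the function names are mine) that computes sensitivity from a truth table; for OR on n bits the razor's edge input is the all-zeroes string.

    from itertools import product

    def sensitivity_at(f, x):
        # Number of positions where flipping one bit of x flips f's output.
        return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i+1:]) for i in range(len(x)))

    def sensitivity(f, n):
        return max(sensitivity_at(f, x) for x in product((0, 1), repeat=n))

    OR = lambda x: int(any(x))

    print(sensitivity(OR, 6))                    # 6: OR is fully sensitive...
    print(sensitivity_at(OR, (0,) * 6))          # ...at the all-zeroes input
    print(sensitivity_at(OR, (1,) + (0,) * 5))   # 1: most other inputs are far less sensitive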

In a future post I'll talk about some approaches towards resolving this conjecture.

Monday, December 04, 2017

Fireside chat with Simons Inst Director Dick Karp



The above link is Samir Khuller interviewing Dick Karp, though it's labelled as a fireside chat with Dick Karp.

Very interesting to hear how TCS has evolved. More generally, it's good to know where you've come from to have a better idea of where you're going.

bill g.

Thursday, November 30, 2017

Kolmogorov Complexity and the Primes

Bill's post on how to derive the non-finiteness of the primes from Van der Waerden's theorem reminds me of a nice proof using Kolmogorov complexity.

A quick primer: Fix some universal programming language. Let C(x), the Kolmogorov complexity of x, be the length of the smallest program that outputs x. A simple counting argument shows that for every n there is an x of length n with C(x) ≥ n: there are 2^n strings of length n but fewer than 2^n programs of length less than n. We call such x "random".

Suppose we had a finite list of primes p_1, …, p_k. Then any number m can be expressed as p_1^{e_1}···p_k^{e_k}. Pick n large, pick a random x of length n and let m be the number x expresses in binary. We can compute m from e_1, …, e_k and a constant amount of other information, remembering that k is a constant. Each e_i is at most log m and so we can describe all of them in O(log log m) bits and thus C(m) = O(log log m). But roughly C(m) = C(x) ≥ n = log m, a contradiction.

But we can do better. Again pick n large, a random x of length n and let m be the number x expresses in binary. Let p_i be the largest prime that divides m, where p_i is the ith prime. We can describe m by p_i and m/p_i, or by i and m/p_i. So we have C(m) ≤ C(i, m/p_i) ≤ C(i) + C(m/p_i) + 2 log C(i) ≤ log i + log(m/p_i) + 2 log log i + c. The 2 log C(i) term is needed to specify the separation between the program for i and the program for m/p_i.

Since C(m) ≥ log m, we have
log m ≤ log i + log(m/p_i) + 2 log log i + c
log m ≤ log i + log m - log p_i + 2 log log i + c
log p_i ≤ log i + 2 log log i + c
p_i ≤ O(i (log i)^2)

The prime number theorem says p_i is approximately i log i, so simple Kolmogorov complexity gets us within a log factor of optimal.

I wrote a short introduction to Kolmogorov complexity with this proof. I originally got the proof from the great text on Kolmogorov complexity from Li and Vitányi and they give credit to Piotr Berman and John Tromp.

Monday, November 27, 2017

Van der Waerden's theorem implies the infinitude of the primes


(Sam Buss and Denis Hirschfeld helped me on this post.)

I was reading the table of contents of the American Math Monthly and saw an article by Levent Alpoge entitled

Van der Waerden and the primes

in which he showed from VDW's theorem that the set of primes is infinite. The article is  here and here. My writeup of it is here.  Prof K saw me reading the paper.

 K: I see you are interested in proving the set of primes is infinite from VDW's theorem.

BILL: Yes, who wouldn't be!!!!

 K: Well, lots of people. Including me. Can't you just state VDW's theorem and then give the normal proof? Would that count? Besides, we already have an easy proof that the set of primes is  infinite without using VDW's theorem.

I turn K's comments  into a valid question:  What does it mean to prove A from B if A is already known?

 There are two issues here, informal and formal.

Informally: If you look at the proof of VDW-->primes infinite, the steps in that proof look easier than the usual proof that the set of primes is infinite. And the proof is certainly different. If you read the paper you will see that I am certainly not smuggling in the usual proof. Also, the proof truly does use VDW's theorem.

Formally one could (and people working in Reverse Mathematics do similar things- see the books Subsystems of Second Order Arithmetic by Simpson, and Slicing the Truth, reviewed here) devise a weak axiom system that itself cannot prove the set of primes is infinite, but can prove the implication VDW-->primes infinite. Note that Reverse Mathematics does this sort of thing, but for proofs involving infinite objects, nothing like what I am proposing here.

Open Problem 1: Find a proof system where the implication VDW-->primes infinite can be proven, but primes infinite cannot. Sam Buss pointed out to me that for the weak system IΔ_0 it is not known if it can prove the primes are infinite.

Open Problem 2: Find a proof system where you can do both proofs, but the proof of the implication is much shorter. Perhaps look at (VDW--> there are at least n primes) and (there are at least n primes)
and look at the lengths of the proofs as a function of n.

Open Problem 3: The statement that there are no primes with n bits (with leading bit 1) can be expressed as a propositional statement. Get lower bounds on its refutation in (say) resolution. (A commenter pointed out an error in a prior version of this one, so be wary- there may be an error here as well.)

I am suggesting work on the reverse mathematics of systems much weaker than RCA0. I do not know if this is a paper, a PhD thesis, a career, a dead end, or already pretty much done but I am not aware of it.


Monday, November 20, 2017

The Grad Student Tax

By now you've read from Luca or Scott or PhD Comics or a variety of other sources about the dangerous changes to the tax code that passed the US House of Representatives last week. Among a number of university-unfriendly policies, the tax code will eliminate the tax exemption on graduate student tuition for students supported with teaching or research duties, which covers nearly every PhD student in STEM fields. The CRA, ACM, IEEE, AAAI, SIAM and Usenix put out a joint statement opposing this tax increase on graduate students. This is real.

Without other changes, a tax on tuition will make grad school unaffordable to most doctoral students. In computer science, where potential PhD students can typically get lucrative jobs in industry, we'll certainly see a precipitous drop in those who choose to continue their studies. Universities will have to adjust by lowering tuition, if finances and state law allow, and raising stipends. US government science funding will at best remain flat, so in almost any scenario we'll see far fewer students pursue PhD degrees, particularly in CS and STEM fields. Keep in mind we already don't come close to producing enough CS PhD students entering academia to meet the dramatically growing demand, and these moves could frustrate faculty who might also head off to industry.

The current Senate proposal leaves the exemption in place, though no one can predict what will happen when the two bills get reconciled. In the best case scenario this bill goes the same way as the failed health care reform, but Republicans seem desperate to pass something major this fall. So reach out to your representatives, especially your senators, and express the need to leave in the exemption.

Thursday, November 16, 2017

A Tale of Three Rankings

In the spring of 2018 US News and World Report should release their latest rankings of US graduate science programs, including computer science. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation-based they roughly correspond to how we see CS departments, though some argue that reputation changes only slowly as the quality of a department changes.

US News and World Report also has a new global ranking of CS departments. The US doesn't fare that well on the list, and the rankings of US programs on the global list are wildly inconsistent with the US list. What's going on?

75% of the global ranking is based on statistics from Web of Science. Web of Science captures mainly journal articles, whereas conferences in computer science typically have a higher reputation and more selectivity. In many European and Asian universities, hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.

The CRA rightly put out a statement urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.

I've never been a fan of using metrics to rank CS departments but there is a relatively new site, Emery Berger's Computer Science Rankings, based on the number of publications in major venues. CS Rankings passes the smell test for both their US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.

Nevertheless I hope CS Rankings will not become the main ranking system for CS departments. Departments that wish to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, and in the long run this would discourage risk-taking, long-range research, as well as innovations in improving diversity or educating graduate students.

As Goodhart's Law states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.

Monday, November 13, 2017

Can you measure which pangrams are natural

A pangram is a sentence that contains every letter of the alphabet.

The classic is:

                                      The quick brown fox jumps over the lazy dog.

(NOTE- I had `jumped' but a reader pointed out that then the sentence has no s, and that `jumps' is the correct word)

which is only 35 letters.

I could give a pointer to lists of such, but you can do that yourself.

My concern is:

a) are there any pangrams that have actually been uttered NOT in the context of `here is a pangram'?

b) are there any that really could be?

That is- which pangrams are natural? I know this is an ill-defined question.

Here are some candidates for natural pangrams

1) Pack my box with five dozen liquor jugs

2) Amazingly few discotheques provide jukeboxes

3) Watch Jeopardy! Alex Trebek's fun TV quiz game

4) Cwm fjord bank glyphs vext quiz
(Okay, maybe that one is not natural as it uses archaic words. It means
``Carved symbols in a mountain hollow on the bank of an inlet irritated an
eccentric person'  Could come up in real life. NOT. It uses every letter
exactly once.)

How can you measure how natural they are?

For the Jeopardy one I've shown it to people and asked them

``What is unusual about this new slogan for the show Jeopardy?''

and nobody gets it. more important- they believe it is the new slogan.

So I leave to the reader:

I) Are there other NATURAL pangrams?

II) How would you test naturalness of such?

Pinning down `natural' is hard. I did a guest post in 2004, before I was an official co-blogger, about when a problem (a set for us) is natural, for example the set of all regular expressions with squaring (see here).

Thursday, November 09, 2017

Advice for the Advisor

A soon-to-be professor asked me recently if I could share some ideas on how to advise students. I started to write some notes only to realize that I had already posted on the topic in 2006.
Have students work on problems that interest them not just you. I like to hand them a proceedings of a recent conference and have them skim abstracts to find papers they enjoy. However if they stray too far from your research interests, you will have a hard time pushing them in the right directions. And don't work on their problems unless they want you to.
Keep your students motivated. Meet with them on a regular basis. Encourage students to discuss their problems and other research questions with other students and faculty. Do your best to keep their spirits high if they have trouble proving theorems or are not getting their papers into conferences. Once they lose interest in theory they won't succeed.
Feel free to have them read papers, do some refereeing and reviewing, give talks on recent great papers. These are good skills for them to learn. But don't abuse them too much.
Make sure they learn that selling their research is as important as proving the theorems. Have them write the papers and make them rewrite until the paper properly motivates the work. Make them give practice talks before conferences and do not hold back on the criticism.
Some students will want to talk about some personal issues they have. Listen as a friend and give some suggestions without being condescending. But if they have a serious emotional crisis, you are not trained for that; point them to your university counseling services.
Once it becomes clear a student won't succeed working with you, or won't succeed as a theorist or won't succeed in graduate work, cut them loose. The hardest thing to do as an advisor is to tell a student, particularly one who tries hard, that they should go do something else. It's much easier to just keep them on until they get frustrated and quit, but you do no one any favors that way.
Computer science evolves dramatically but the basic principles of advising don't. This advice pretty much works now as well as it did in 2006, in the 80's when I was the student, or even in the 18th century. Good advising never goes out of style.

Of course I don't and can't hand out a physical proceedings to a student to skim. Instead I point to on-line proceedings but browsing just doesn't have the same feel.

Looking back I would add some additional advice. Push your students and encourage them to take risks with their research. If they aren't failing to solve their problems, they need to try harder problems. We too often define success by having your paper accepted into a conference. Better to have an impact on what others do.

Finally remember that advising doesn't stop at the defense. It is very much a parent-child relationship that continues long after graduation. Your legacy as a researcher will eventually come to an end. Your legacy as an advisor will live on through those you advise and their students and so on to eternity.

Monday, November 06, 2017

The two fears about technology- one correct, one incorrect


When the Luddites smashed loom machines, their supporters (including Lord Byron, Ada Lovelace's father) made two arguments in favor of the Luddites (I am sure I am simplifying what they said):

  1. These machines are tossing people out of work NOW and this is BAD for THOSE people. In this assertion they were clearly correct. (`Let's just retrain them' only goes so far.)
  2. This is bad for mankind! Machines displacing people will lead to the collapse of civilization! Mankind will be far worse off because of technology. In this assertion I think they were incorrect. That is, I think civilization is better off now because of technology. (If you disagree leave an intelligent polite comment. Realize that just by leaving a comment you are using technology. That is NOT a counterargument. I don't think it's even IRONY. Not sure what it is.)
  3. (This third one is mine and it's more of a question.) If you take the human element out of things then bad things will happen. There was a TV show where a drone was about to strike a car, but a HUMAN noticed there were red flowers on the car, deduced it was a wedding, and the strike was called off. Yeah! But I can equally well see the opposite: a computer program notices things a person would have missed that indicate it's not the target. But of course that wouldn't make as interesting a story. More to the point- if we let computers make decisions without the human element, is that good or bad? For grad admissions does it get rid of bias or does it reinforce bias? (See the book Weapons of Math Destruction for an intelligent argument against using programs for, say, grad admissions and other far more important things.)
I suspect that an attitude like this greeted every technological innovation. For AI there is a similar theme but with one more twist: the machines will eventually destroy us! Bill Gates and Stephen Hawking have expressed views along these lines.

When Deep Blue beat Kasparov in chess there were some articles about how this could be the end of mankind. That's just stupid. For a more modern article on some of the dangers of AI (some reasonable, some not) see this article on Watson.

It seems to me that AI can do some WELL DEFINED things (e.g., chess) very well, and even some not-quite-so-well-defined things (natural language translation) very well, but the notion that machines will evolve to be `really intelligent' (not sure that is well defined), think they are better than us and destroy us seems like bad science fiction (or good science fiction).

Watson can answer questions very, very well, and medical diagnosis machines may well be much better than doctors. While this may be bad news for Ken Jennings and for doctors, I don't see it being bad for humanity in the long term. Will we one day look back at the fears of AI and see that they were silly--- that the machines did not, Terminator-style, turn against us? I think so. And of course I hope so.
  


Thursday, November 02, 2017

Matching and Complexity

Given a group of people, can you pair them up so that each pair are Facebook friends with each other? This is the famous perfect matching problem. The complexity of matching has a rich history which got a little richer in the past few months.

For bipartite graphs (consider only friendships between men and women), we have had fast matching algorithms since the 1950's via augmenting paths. In his 1965 classic paper, Paths, Trees and Flowers, Jack Edmonds gives a polynomial-time algorithm for matching on general graphs. This paper also laid out an argument for polynomial time as efficient computation that would lead to the complexity class P (of P v NP fame).
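For the bipartite case the augmenting-path idea fits in a few lines; here is a minimal Python sketch (Kuhn's algorithm, my own illustration; Edmonds' blossom algorithm for general graphs is considerably more involved).

    def max_bipartite_matching(adj, n_right):
        # adj[u] lists the right-side vertices adjacent to left vertex u.
        match_right = [-1] * n_right   # match_right[v] = left vertex matched to v, or -1

        def try_augment(u, seen):
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    # Use v if it is free, or if its current partner can be rematched.
                    if match_right[v] == -1 or try_augment(match_right[v], seen):
                        match_right[v] = u
                        return True
            return False

        return sum(try_augment(u, set()) for u in range(len(adj)))

    # Left vertices 0,1,2 with their friends on the right:
    print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))   # 3, a perfect matching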

After Razborov showed that the clique problem doesn't have polynomial-size monotone circuits, his proof techniques also showed that matching doesn't have polynomial-size monotone circuits, and Raz and Wigderson showed that monotone circuits for matching require linear depth (and monotone formulas exponential size). Because of Edmonds' algorithm, matching does have polynomial-size circuits in general. NOTs are very powerful.

Can one solve matching in parallel, say in the class NC (Nick's Class, after Pippenger) of problems computable by a polynomial number of processors in polylogarithmic time? Karp, Upfal and Wigderson give a randomized NC algorithm for matching. Mulmuley, Vazirani and Vazirani prove an isolation lemma that allows a randomized reduction of matching to the determinant. Howard Karloff exhibited a Las Vegas parallel algorithm, i.e., one that never makes a mistake and runs in expected polylogarithmic time.

Can one remove the randomness? An NC algorithm for matching remains elusive, but this year brought two nice results in that direction. Ola Svensson and Jakub Tarnawski give a quasi-NC algorithm for general graph matching. Quasi-NC means a quasipolynomial (2^polylog) number of processors. Nima Anari and Vijay Vazirani give an NC algorithm for matching on planar graphs.

Matching is up there with primality, factoring, connectivity, graph isomorphism, satisfiability and the permanent as specific algorithmic problems that have played such a large role in helping us understand complexity. Thanks, matching problem, and may you find NC nirvana in the near future.

Tuesday, October 31, 2017

The k=1 case is FUN, the k=2 case is fun, the k ≥ 3 case is... you decide.

 (All of the math in this post is in here.)


The following problem can be given as a FUN recreational problem to HS students or even younger kids. (I am sure that many of you already know it, but my point is how to present it to HS students and perhaps even younger.)

Alice will say all but ONE of the elements of {1,...,10^10}, in some order.

Bob listens with the goal of figuring out the missing number. Bob cannot possibly store 10^10 numbers in his head. Help Bob out by giving him an algorithm which will not make his head explode.

This is an easy and fun puzzle. The answer is in the writeup  I point to above.

The following variant is a bit harder but a bright HS student could get it: Same problem except that Alice leaves out TWO numbers.
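For what it's worth, here is a sketch of the standard answers to these first two variants (the well-known power-sum tricks, not necessarily the solutions in my writeup): Bob keeps a running sum, and for two missing numbers also a running sum of squares.

    import math

    def find_one_missing(stream, N):
        # Bob only needs a running sum, about log N bits of memory.
        return N * (N + 1) // 2 - sum(stream)

    def find_two_missing(stream, N):
        s = N * (N + 1) // 2 - sum(stream)                               # a + b
        q = N * (N + 1) * (2 * N + 1) // 6 - sum(x * x for x in stream)  # a^2 + b^2
        d = math.isqrt(2 * q - s * s)                                    # |a - b|, since (a-b)^2 = 2q - s^2
        return (s + d) // 2, (s - d) // 2

    N = 100
    print(find_one_missing((x for x in range(1, N + 1) if x != 42), N))            # 42
    print(find_two_missing([x for x in range(1, N + 1) if x not in (17, 83)], N))  # (83, 17)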

The following variant is probably more appropriate for a HS math competition than for a FUN gathering of HS students: Same problem except that Alice leaves out THREE numbers.

The following variant may be easier because it's harder: Alice leaves out k numbers, for k a constant. It might be easier than the k=3 case since the solver knows NOT to use properties of 3.

I find it interesting that the k=1, k=2, and k ≥ 3 cases are on different levels of hardness. I would like a more HS-level answer to the k ≥ 3 case.

Thursday, October 26, 2017

2017 Fall Jobs Post

You're finishing up grad school or a postdoc and ask yourself: what should I do for the rest of my life? We can't answer that for you, but we can help you figure out your options in the annual fall jobs post. We focus mostly on academic jobs. You could work in industry, but there's nothing like choosing your own research directions, working directly with students and taking pride in their success.

For computer science faculty positions best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. AcademicKeys also lists a number of CS jobs. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway since some departments may hire theorists when other opportunities to hire dry up. Think global--there are growing theory groups around the world and in particular many have postdoc positions to offer.

The computer science job market remains hot with most CS departments trying to hire multiple faculty. Many realize the importance of having a strong theory group, but it doesn't hurt if you can tie your research to priority areas like big data, machine learning and security.

Remember that in your research statement, your talk and your interview you need to sell yourself as a computer scientist, not just a theorist. Show interest in other research areas and, especially in your 1-on-1 meetings, find potential ways to collaborate. Make the faculty in the department want you as a colleague, not just someone hiding out proving theorems.

Good luck to all on the market, and we can't wait for our Spring 2018 jobs post to see where you all end up.

Monday, October 23, 2017

Open: PROVE that pumping and reductions can't prove every non-reg lang non-reg.

Whenever I post on regular langs, whatever aspect I am looking at, I get a comment telling me that we should stop proving things non-regular with the pumping lemma (and often asking me to stop talking about it) and instead have our students prove things not regular by either the Myhill-Nerode theorem or by Kolmogorov complexity. I agree with these thoughts pedagogically but I am curious:

Is there a non-reg lang L such that you CANNOT prove L non-reg via pumping and reductions?

There are many pumping theorems (one of which is an iff, so you could use it on all non-reg langs, but you wouldn't want to-- it's in the paper pointed to later). I'll pick the most powerful pumping lemma that I can imagine teaching to a class of ugrads:

If L is regular then there exists n_0 such that for all w ∈ L with |w| ≥ n_0 and all prefixes x' of w

with |w| - |x'| ≥ n_0, there exist x,y,z such that

|x| ≤ n_0

y is nonempty

w = x'xyz

for all i ≥ 0, x'xy^i z ∈ L

If this were all we could use then the question would be silly: just take

{ w : number of a's NOT EQUAL number of b's }

which is not regular but satisfies the pumping lemma above. SO I also allow closure properties. I define (and this differs from my last post--- I thank my readers, some of whom emailed me, for help in clarifying the question)

A ≤ B

if there exists a function f such that if f(A) = B then A regular implies B regular

(e.g., f(A) = A ∩ a*b* )

(CORRECTION: Should be B Regular --> A regular. Paul Beame pointed this out in the comments.)

(CORRECTION- My definition does not work. I need something like what one of the commenters suggested and what I had in a prior blog post. Call CL a closure function if for all A, if A is
regular then CL(A) is regular, like f(A) = A ∩ a*b*. We want a^n b^n ≤ {w : number of a's = number of b's}
via f(A) = A ∩ a*b*. So we want: A ≤ B if there is a closure function f with f(B) = A.)


A set B is Easily proven non-reg if either

a) B does not satisfy the pumping lemma, or

b) there exists a set A that does not satisfy the pumping lemma such that A ≤ B.

OPEN QUESTION (at least open for me, I am hoping someone out there knows the answer)

Is there a language that is not regular but NOT easily proven non-reg?




Ehrenfeucht, Parikh and Rozenberg, in a paper Pumping Lemmas for Regular Sets (I could not find the official version but I found the Tech Report online here. Ask your grandparents what a Tech Report is. Or see this post here from Lance about Tech Reports.), proved an iff pumping lemma. They gave as their motivating example an uncountable family of languages that cannot be proved non-regular even with a rather fancy pumping lemma. But their languages CAN be easily proven non-reg; I describe that here. (This is the same paper that proves an iff Pumping Lemma. It uses Ramsey Theory so I should like it. Oh well.)

SO, I looked around for candidates for non-reg languages that could not be easily proven non-regular. The following were candidates, but I unfortunately(?) found ways to prove them non-regular using PL and closure (I found the ways by asking some bright undergraduates; to give credit, Aaron George did these.)

{ a^i b^j : i and j are relatively prime }

{ x x^R w : x,w nonempty }  where x^R is the reverse of x.

I leave it to the reader to prove these are easily proven non-regular.

To re-iterate my original question: Find a non-reg lang that is not easily proven non-reg.

Side Question- my definition of reduction seems a bit odd in that I am defining it the way I want it to turn out. Could poly-Turing reductions have been defined as: A ≤ B iff (A is in P implies B is in P)? Is that equivalent to the usual definition? Can I get a more natural definition for my regular reductions?




Thursday, October 19, 2017

The Amazon Gold Rush


Unless you have been hiding under a rock, you've heard that Amazon wants to build a second headquarters in or near a large North American city. Amazon put out a nice old-fashioned RFP.
Please provide an electronic copy and five (5) hard copies of your responses by October 19, 2017 to amazonhq2@amazon.com. Please send hard copies marked “confidential” between the dates of October 16th – 19th to ...
Hard copies? Just like the conference submissions of old. Key considerations for Amazon: a good site, local incentives, a highly educated labor pool and strong university system, proximity to major highways and airports, cultural community fit and quality of life.

I've seen companies put subsidiaries in other cities, or move their headquarters away from their manufacturing center, like when Boeing moved to Chicago. But building a second headquarters, "a full equal" to their Seattle campus, seems unprecedented for a company this size. Much like a company has only one CEO or a college has one president, having two HQs raises the question of where decisions get made. But Amazon is not a typical company and maybe location means less these days.

Atlanta makes many short lists. We've got a burgeoning tech community, a growing city, sites with a direct train into the world's busiest airport, good weather, low cost of living and, of course, great universities. Check out Techlanta and ChooseATL.

So am I using Amazon's announcement as an excuse to show off Atlanta? Maybe. But winning the Amazon HQ2 would be transformative to the city, not only in the jobs it would bring, but in immediately branding Atlanta as a new tech hub. Atlanta will continue to grow whether or not Amazon comes here but high profile wins never hurt.

Many other cities make their own claims on Amazon and I have no good way to judge this horse race (where's the prediction market?). It's impossible to tell how Amazon weighs their criteria, and it may come down to which city offers the best incentives. Reminds me of the Simons Institute competition announced in 2010 (Berkeley won), though with far larger consequences.

Monday, October 16, 2017

Reductions between formal languages


Let EQ = {w : number of a's = number of b's }

Let EQO = { a^n b^n : n ∈ N } (so it's EQual and in Order)

Typically we do the following:

Prove EQO is not regular by the pumping lemma.

Then to show EQ is not regular you say: if EQ were regular then EQ ∩ a*b* = EQO would be regular; since EQO is not regular, EQ is not regular. (I know you can also show EQ not regular with the Pumping Lemma directly, but that's not important now.)

One can view this as a reduction:

A  ≤  B

if one can take B, apply a finite sequence of closure operations (e.g., intersect with a regular lang,
complement, replace all a with aba, replace all a with e (the empty string)) and get A.

If A is not regular and A≤ B then B is not regular.

Note that

EQO ≤ EQ ≤ \overline{EQ}

Since EQO is not regular (by pumping) we get that EQ and \overline{EQ} are not regular.

Hence we could view the theory of showing things not-regular like the theory of NP-completeness,
with reductions and such. However, I have never seen a chain of length more than 2.

BUT- consider the following! Instead of using the Pumping Lemma we use Comm. Comp. I have
been able to show (and this was well known) that

EQ is not regular by using Comm. Comp:

EQH = { (x,y) : |x|=|y| and number of a's in xy = number of b's in xy }

Comm Complexity of EQH is known to be log n  + \Theta(1). Important- NOT O(1).

If EQ were regular then Alice and Bob would have an O(1) protocol: Alice runs x through the DFA and
transmits the resulting state to Bob; Bob runs y from that state to the end and transmits 1 if he ended up in an accepting state, 0 if not.
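Here is a small Python sketch of that protocol (my own toy example, using the regular language {w : #a ≡ #b mod 3} as a stand-in, since EQ itself has no DFA): Alice's entire message is the name of one state.

    # States are 0,1,2 = (number of a's - number of b's) mod 3; state 0 accepts.
    START, ACCEPT = 0, {0}

    def delta(state, ch):
        return (state + (1 if ch == "a" else -1)) % 3

    def run(state, w):
        for ch in w:
            state = delta(state, ch)
        return state

    def protocol(x, y):
        state = run(START, x)            # Alice's only message: one state, O(1) bits
        return run(state, y) in ACCEPT   # Bob finishes the run and announces the answer

    print(protocol("aab", "bab"))   # aabbab has 3 a's and 3 b's -> True
    print(protocol("aa", "ab"))     # aaab has 3 a's and 1 b -> False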

But I was not able to show EQO is not regular using Comm Complexity. SO imagine a bizarro world where I taught my students the Comm Comp approach but not the Pumping Lemma. Could they prove that EQO is not regular? For one thing, could they prove

EQO ≤ EQ  ?

Or show that this CANNOT be done.

Anyone know?

One could also study the structure of the degrees induced by the equiv classes.
If this has already been done, let me know in the comments.






Thursday, October 12, 2017

Lessons from the Nobel Prizes

We've had a big week of awards with the Nobel Prizes and the MacArthur "Genius" Fellows. The MacArthur Fellows include two computer scientists, Regina Barzilay and Stefan Savage, and a statistician Emmanuel Candès but no theoretical computer scientists this year.

No computer scientists among the Nobel Laureates either but technology played a large role in the chemistry and physics prize. The chemistry prize went for a fancy microscope that could determine biomolecular structure. The LIGO project that measures extremely weak gravitational waves received the physics prize.

In a sign of the times, Jeffrey Hall, one of the medical prize recipients, left science due to lack of funding.

The economics prize went to Richard Thaler who described how people act irrationally but often in predictable ways such as the endowment effect that states the people give more value to an object they own versus one they don't currently have. The book Thinking Fast and Slow by 2002 Laureate Daniel Kahneman does a great job describing these behaviors.

While at Northwestern I regularly attended the micro-economics seminars many of which tried to give models that described the seemingly irrational behaviors that researchers like Thaler brought to light. My personal theory: Humans evolved to have these behaviors because while they might not be the best individual choices they make society better overall.

Monday, October 09, 2017

Michael Cohen

When I first saw posts about Michael Cohen (see here, here, here) I wondered

is that the same Michael Cohen who I knew as a HS student?

It is.  I share one memory.

Michael Cohen's father is Tom Cohen, a physics professor at UMCP.  They were going to a Blair High School Science fair and I got a ride to it (I had some students presenting at it.) In the car with Tom and Michael, Michael began telling is dad that his dad's proofs were not rigorous enough. I was touched by the notion that father and son could even have such a conversation.

Were Tom's proofs rigorous? I suspect that for Physics they were. But the fact that Michael could, as a high school student, read his dad's paper and have an opinion on it, very impressive. And very nice.

Michael was brilliant. It's a terrible loss.

Thursday, October 05, 2017

Is the Textbook Market doomed?

STORY ONE:
I always tell my class that its OKAY if they don't have the latest edition of the textbook, and if they can find it a  cheap, an earlier edition (often on Amazon, sometimes on e-bay), that's fine.  A while back at the beginning of a semester I was curious if the book really did have many cheap editions so I typed in the books name.

I found a free pdf copy as the fourth hit.

This was NOT on some corner of the dark web. This was easy to find and free. There were a few things not quite right about it, but it was clearly fine to use for the class. I wanted to post this information on the class website but my co-teacher was worried we might get in trouble for it, and he pointed out that the students almost surely already know, so we didn't. (I am sure thats correct. When I've discussed this issue with people, they are surprised I didn't already know that textbooks are commonly on the web, easy to find.)


STORY TWO:
I know someone who is thinking of writing a cheap text for a CS course. It will only be $40.00. That is much cheaper than the cost of a current edition of whats out there, and competitive with the used editions, but of course much more expensive than free. I think once students start getting used to free textbooks, even $40.00 is a lot.

STORY THREE (What I do): For discrete math we had slides on line, videos of the lectures on line, and some notes on line. For smaller classes I have my own notes on line. The more I teach a course the better then notes get as I correct them, polish them, etc, every time I teach.  Even so, the notes are very good if you've gone to class but not very good if you haven't (that is not intentional- is more a matter of, say, my notes not having actual pictures of DFA's and NFA's).  I have NO desire to polish the notes more and make a book out of them.  Why do some people have that urge? I can think of two reason though I am sure there are more: (1) To make money. If you get a text out early in a field then this could work (I suspect CLR algorithms text made money). I wonder if Calc I books still make money given how many there are. But in any case, this motivation is now gone--- which is one of the points of this post.  (2) You feel that your way to present (say) discrete math is SO good that others should use it also!  But now you can just post a book or notes on the web, do a presentation at SIGCSE or other comp-ed venues. You don't need to write a textbook. (Personally I think this is a bit odd anyway--- people should have their own vision for a course. Borrowing someone else's seems strange to me.)

DEATH SPIRAL: Books cost a lot, so students buy them cheap or get free downloads, so the companies does not make money so they raise the price of the book, so students buy them cheap...(I"m not going to get into whose fault this is or who started it, I'm just saying that this is where we are now.)


With books either cheap-used or free, how will the textbook market survive? Or will it? Asking around I got the following answers

1) There will always be freshman who don't know that books can be cheap or free. This might help with Calc I and other first-year courses, but not beyond that.

2) There will always be teachers who insist the students buy the latest edition so that they can assign problems easier, e.g., `HW is page 103, problems 1,3,8  and page 105 problems 19 and 20. This will help the textbook publishers in that window between the new edition coming out and the book being scanned in. Is that a long window?

3) Some textbooks now come with added gizmos- codes on the web to get some stuff. For the teachers there may be online quizzes. Unfortunately this makes the books cost even more. I personally never found such things useful, but others might.

4) If a student has a scholarship that pays for books, and the students buys the books used on amazon, can the scholarship still pay for them? I ask non-rhetorically. Even if the answer is no, so the student has to buy books at (say) the campus book store (will they still sell books in 10 years?) this is not enough to save the market.

5) Rent-a-books. I've seen these services. But they still cost too much.

6) e-books. If e-books catch on  then that might get rid of the used-book market. And if they are cheap enough that might help. But the flip side- once e-books are out there  it might be even easier to find a free copy online someplace. (Side note- Many people tell me that math books just don't work as e-books.... yet.)

7) The basic problem is cost. Is there a way for publishers to keep costs down? Or is even that too late as students get used to free or free-ish books?

So I ask again, non-rhetorically- is the textbook market doomed?


Sunday, October 01, 2017

Monty Hall (1921-2017) and His Problem

Monty Hall passed away yesterday, best known for co-creating and hosting the game show Let's Make a Deal, a show I occasionally watched as a kid. To the best of my knowledge he's never proven a theorem so why does he deserve mention in this blog?

For that we turn back the clock to 1990, when I was a young assistant professor at Chicago, more than a decade before this blog started, even before the world-wide web. The Chicago Tribune was a pretty good newspaper in those days before Craigslist. Nevertheless, the Sunday Tribune, as well as many other papers across the country, included Parade, a pretty fluffy magazine. Parade had (and still has) a column "Ask Marilyn" written by Marilyn vos Savant, who does not hide the fact that she had the world's highest IQ according to the record books of the 1980's.

In 1990, vos Savant answered the following question in her column. Think about the answer if you haven't seen it before.
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
This is the kind of deal Monty Hall might have made on his show and so his name got attached to the problem in a 1976 paper in the American Statistician. Marilyn vos Savant claimed it was an advantage to switch. Many mathematicians at the time wrote into Parade arguing this was wrong--either way you have a 50% chance of winning. Even several of my fellow colleagues initially believed it made no difference to switch. Who was this low-brow magazine columnist to say otherwise? In fact, Marilyn was right.

Here is my simple explanation: if you commit to switching, you win exactly when you pick a goat in the first round, which happens with probability 2/3. Thinking it makes no difference is a fallacy in conditional probability, not unlike Mossel's Dice Paradox.

Monty Hall himself ran an experiment in his home in 1991 to verify that Marilyn was correct, modulo the assumption that the host would always offer the switch and that everything was chosen uniformly.
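Here is a quick simulation in the same spirit (my own sketch, under those same assumptions: the host always opens a goat door and always offers the switch).

    import random

    def play(switch, trials=100000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a goat door that is not the contestant's pick.
            opened = random.choice([d for d in range(3) if d != pick and d != car])
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))   # about 1/3
    print(play(switch=True))    # about 2/3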

Thanks to Bill Gasarch and Evan Golub for some useful details and links. Bill says "history being history, Monty Hall will be remembered as a great mathematician working in Probability." Maybe not, but it does get him remembered in the computational complexity blog.

Thursday, September 28, 2017

Tragic Losses

I'd like to remember two young people whose lives were taken way too early. I didn't know either well but both played large roles in two different communities.

Michael Cohen
Michael Cohen, a young researcher in theoretical computer science, passed away. He had a number of great algorithmic results, most notably his solely authored paper giving a polynomial-time algorithm to construct Ramanujan graphs. Luca Trevisan and MSR give their remembrances. Update (10/5): See also Scott Aaronson, whose post includes comments from Michael's mother and father and a Daily Cal article.

Scout Schultz, in a story that made national news, studied computer engineering at Georgia Tech. On Saturday September 16th Scout was shot by a member of the Georgia Tech campus police. A vigil was held the following Monday, quite peaceful until a splinter group (mostly not Georgia Tech students) broke off, marched to the Georgia Tech police department and set a police car on fire. 

Scout Schultz
The death and its aftermath have shaken us all up at Georgia Tech. What has impressed me during this time is the strength of the Georgia Tech student body. Instead of focusing on blame, they have come together to remember Scout, a leader of the LGBTQ community on campus. Being in a liberal city in a conservative state, the politics of the student body is quite mixed, but it doesn't divide the students; rather it brings them together. There's hope yet.

Sunday, September 24, 2017

Science fiction viewers used to embrace diversity (or did they) and now they don't (or do they)

(This post is inspired by the choice of a female actor to be the next Doctor on the TV show Dr. Who. Note that you can't say `the next Dr. Who will be female' since Dr. Who is not the name of the character. The name has not been revealed. Trivia: the first Dr. Who episode aired the same day Kennedy was shot.)


I give a contrast and then say why it might not be valid:

Star Trek- The Original Series. 1966. There is a black female communications officer, a Russian officer and an Asian officer. And Science Fiction Viewers EMBRACED and APPROVED of this (for the time) diversity.

Modern times: a black stormtrooper in Star Wars VII (see here), a black Jimmy Olsen in Supergirl (see here), female Ghostbusters (see here), a female Doctor on Dr. Who (see here and here), and even the diversity of ST-Discovery (see here) have upset science fiction viewers.

So what happened in 50 years?

Now I say why this contrast might not be valid.  All items here are speculative, I welcome comments that disagree intelligently. Or agree intelligently. Or raise points about the issue.

1) Science fiction fans aren't racist or anti-women, they just don't like change. Star Trek: The Original Series didn't have an original canon to violate. Having a black captain (ST:DS9) or a female captain (ST:VOY) was a matter of NEW characters and I don't recall any objections. (Were there objections?) If in the ST reboot they had made Captain Kirk black, I suspect there would have been objections which the objectors would claim are not racist. Would they be?

2) While the fans who are upset get lots of coverage, they might be a minority. I sometimes see more stuff on the web arguing against the racism than the racism itself. (A friend of mine in South Carolina told me that whenever a Confederate monument is about to be taken down, the SAME 12 people show up to protest but get lots of coverage.)

3) Science fiction has gotten much more mainstream, so the notion that `science fiction viewers now do BLAH' is rather odd since it's no longer a small community.

4) In 1966 there was no internet (not even in the Star Trek Universe!!) for fans and/or racists to vent their anger.

5) Some of the objections have valid counterparts: "I don't mind Jimmy Olsen being black, I mind him being so handsome, whereas in the Superman canon he is not." (Counter: some of the objections are repulsive: "I don't mind Jimmy Olsen being black, I mind him being a love interest for Supergirl." Gee, why is that?)

5a) Another `valid' one: `stormtroopers were all cloned from ONE white guy, so there cannot be a black stormtrooper.' Racism hiding behind nitpicking? Actual nitpicking?

6) I give the fans back in 1966 too much credit- it was the showrunners who embraced diversity. The fans-- did they care?

6a) I give the showrunners too much credit. ALL Klingons are war-like, ALL Romulans are arrogant, ALL Vulcans are logical (except during Pon Farr), and in the more recent shows like ST-TNG ALL Ferengi are greedy. So the shows accept that stereotypes can be true.

6b) Women were not portrayed that well in the Star Trek universe, even in the more recent shows. See 15 real terrible moments for women on Star Trek.

7) Even the 1966 ST was not as diverse as I make it out to be. I doubt it would pass the Bechdel test.


Two other points of interest:

1) In the 1960's science fiction was sometimes used as a way to talk about current issues, since talking about them directly would not have been allowed. We can't really talk about real racism in a TV show, so we'll have an alien race whose members are all half-black and half-white, but differ on which half is which (see here). And now? Racism, sexism, homophobia can all be talked about freely. Hence other media has moved ahead of science fiction on diversity.

2) Also of interest, though not science fiction: the Edward Albee estate blocked a production of Who's Afraid of Virginia Woolf? that was going to cast a black man as Nick (a supporting character- George is the main male character). See here. Why the block? Because that is what Albee (who is now dead) requested. What would he think now? Who knows?

Thursday, September 21, 2017

Acronyms and PHP

Whenever I teach discrete math and use FML to mean Formula, the students laugh since it's a common acronym for Fuck My Life. Now they laugh, and I say I know why you are laughing, I know what it means, and they laugh even harder.

BUT it got me thinking: Pigeonhole Principle! There are more things we want short acronyms for than there are short acronyms. Below are some I thought of. I am sure there are others, in fact I am sure there are websites of such, but I wanted to see which ones I just happen to know.

AMS- American Mathematical Society, and much much more: see here

DOA-

Dead on Arrival

Department of Aging. Scary!

ERA-

Earned Run Average in Baseball,

Equal Rights Amendment in politics

PCP-

Phencyclidine, a drug that you should never take.

Probabilistically Checkable Proofs. Obscure to the public but not to us.

ADDED LATER: A reader noted Post Correspondence Problem, a good example of a natural undecidable problem.

IRA- 

Irish Republican Army

Individual Retirement Account

Several companies have been rumored to fund terrorism because they gave their employees IRAs. The headline `Company X funds IRAs' could be misunderstood.


SAT-

Scholastic Aptitude Test

Satisfiability (of Boolean Formulas) Obscure to the public but not to us. Actually it may get less obscure as more ``proofs'' resolving P vs NP come out.

SJW

Single Jewish Female (in classified ads- more on that later). I think SJF is more common.

Social Justice Warrior (sounds like a good thing but might not be)

Classified ads are a source of many acronyms which can be used to teach combinatorics.

{S,M,W,D,G}{B,C,H,J,W}{M,F}


S-single, M-married, W-widowed, D-divorced, G-gay (this one I've seen alone, making me wonder about S/M/W/D? I've also seen four-letter acronyms to disambiguate).

B- black, C-Christian, H-Hispanic,  J-Jewish, W-White.

M,F- Male, Female, though I am sure there are ways to say other genders.

Great for combinatorics! Especially if you add in other ones (like BD). See the sketch below.
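For instance, a quick enumeration of just the three-slot version above (a small sketch, nothing deep):

    from itertools import product

    # The classified-ad acronyms described above: 5 statuses x 5 backgrounds x 2 genders.
    status     = "SMWDG"
    background = "BCHJW"
    gender     = "MF"

    acronyms = ["".join(t) for t in product(status, background, gender)]
    print(len(acronyms))      # 5 * 5 * 2 = 50
    print(acronyms[:5])       # ['SBM', 'SBF', 'SCM', 'SCF', 'SHM']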

WTF-

Wisconsin Tourism Federation

You know what else it means so I won't say it (this is a G-rated blog). When I first saw it I thought `what the fuck?- how could they have screwed up so badly?'


TEACHING TOOL- when teaching PHP (the Pigeonhole Principle, not the language PHP, which stands for PHP: Hypertext Preprocessor, a recursive acronym, and originally Personal Home Page) you can use the fact that

number of concepts GREATER THAN number of 3-letter combos (there are only 26^3 = 17,576 of the latter)

implies that some 3-letter combos will be used more than once.

Sunday, September 17, 2017

A problem I thought was interesting- now...

On Nate Silver's site he sometimes (might be once a week) has a column of problems edited by Oliver Roeder. Pretty much math problems, though sometimes not quite.

Some are math problems that I have seen before (e.g., hat problems). I don't bother submitting since that would just be goofy. I would be a ringer.

Some are math problems that I have not seen before; I try them, I fail, but I read the answer and am enlightened. I call that a win.

But some are math problems that I have not seen before; I try them, I fail, but when I see the solution it's a computer simulation or something else that isn't quite as interesting as I had hoped.

I describe one of those now; however, I ask if it can be made more interesting.

The problem is from this column: here

I paraphrase: Let A be the numbers {1,2,3,...,100}. A sequence is nice if (1) it begins with any number in A, (2) every number is from A and is either a factor or a multiple of the number just before it, and (3) no number appears more than once. Find the LONGEST nice sequence.

Example of a nice sequence: 

4, 12, 24, 6, 60, 30, 10, 100, 25, 5, 1, 97
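If you want to experiment yourself, here is a small brute-force search (my own sketch, not the Riddler's program). Exhaustive search like this is only feasible for small n, nowhere near 100:

    def longest_nice_sequence(n):
        """Find a longest 'nice' sequence over {1,...,n} by depth-first search
        on the graph whose edges join numbers where one divides the other.
        Exponential time: fine for small n, hopeless for n = 100."""
        adj = {i: [j for j in range(1, n + 1)
                   if j != i and (i % j == 0 or j % i == 0)]
               for i in range(1, n + 1)}
        best = []

        def dfs(path, used):
            nonlocal best
            if len(path) > len(best):
                best = list(path)
            for nxt in adj[path[-1]]:
                if nxt not in used:
                    used.add(nxt)
                    path.append(nxt)
                    dfs(path, used)
                    path.pop()
                    used.remove(nxt)

        for start in range(1, n + 1):
            dfs([start], {start})
        return best

    print(longest_nice_sequence(12))   # one longest nice sequence using 1..12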

I worked on it

1) By hand I came up with a nice sequence of length 42. This was FUN! You can either have fun trying to find a long nice sequence or you can look at mine here.

2) I tried to prove that it was optimal, hoping that I would either show it was optimal or be guided to a longer sequence. Neither happened. More important is that this was NOT FUN.

3) I looked forward to the solution that would be in the next column and would be enlightening. 

4) The next column, which did have the solution, is here! The answer was a sequence of length 77 found by a program that also verified there was no longer sequence. The sequence itself was mildly enlightening in that I found some tricks I didn't know about, but the lack of a real optimality proof was disappointing.

They mentioned that this is a longest-path problem (the graph has vertex set {1,...,100} with edges between numbers where one is a multiple of the other) and that such problems are NP-complete. That gave the impression that THIS problem is hard since it's a case of an NP-complete problem. That's not quite right- it's possible that this type of graph has a quick solution.

But I would like YOU the readers to help me turn lemon into lemonade.

1) Is there a short proof that 77 is optimal? Is there a short proof that (say) there is no sequence of length 83? I picked 83 at random. One can easily prove there is no sequence of length 100 (each of the 10 primes larger than 50 can only sit next to 1 or at an end of the sequence, so most of them must be left out).

2) Is the following problem in P or NPC or if-NPC-then-bad-thing-happen:

Given (n,k), is there a nice sequence over {1,...,n} of length at least k? (n is in binary, k is in unary, so that the problem is in NP.)

I suspect not NPC.

3) Is the following problem in P or NPC or ...

Given a set of numbers A and a number k, is there a nice sequence of elements of A of length at least k? (k in unary)

Might be NPC if one can code any graph into such a set.

Might be in P since the set is given explicitly, so the input is long.

4) Is the following solvable: given a puzzle in the Riddler, determine ahead of time if it's going to be interesting? Clyde Kruskal and I have a way to solve this- for every even-numbered column I read the problem and the solution and tell him if it's interesting, and he does the same for odd-numbered columns.


Thursday, September 14, 2017

Random Storm Thoughts

It's Monday as I write this post from home. Atlanta, for the first time ever, is under a tropical storm warning. Georgia Tech is closed today and tomorrow. I'm just waiting for the power to go out. But whatever happens here won't even count as a minor inconvenience compared to what those in Houston, the Caribbean and Florida are facing. Our hearts go out to all those affected by these terrible storms.

Did global warming help make Harvey and Irma as dangerous as they became? Hard to believe we have an administration that won't even consider the question and keeps busy eliminating "climate change" from research papers. Here's a lengthy list cataloging Trump's war on science. 

Tesla temporarily upgraded its Florida owners' cars, giving them an extra 30 miles of battery range. Glad they did this, but it raises the question of why Tesla restricted the range in the first place. It reminds me of the 1970's: if you wanted a faster IBM computer, you paid more and an IBM technician would come and turn the appropriate screw. Competition should prevent such software limits on hardware. Who will be Tesla's competitors?

During all this turmoil the following question by Elchanan Mossel had me oddly obsessed: Suppose you roll a fair six-sided die repeatedly until you get a six. What is the expected number of throws, given that all the throws ended up being even numbers? My intuition was wrong, though when Tim Gowers falls into the same trap I don't feel so bad. I wrote a short Python program to convince me, and the program itself suggested a proof.
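A sketch of what such a simulation might look like (not necessarily the exact program I wrote); the conditional expectation comes out near 1.5, not the "obvious" 3:

    import random

    # Roll until a 6 appears; keep the trial only if every roll in it was even,
    # then average the lengths of the kept trials.
    lengths = []
    for _ in range(1_000_000):
        count = 0
        while True:
            roll = random.randint(1, 6)
            count += 1
            if roll % 2 == 1:      # an odd roll disqualifies the trial
                count = None
                break
            if roll == 6:
                break
        if count is not None:
            lengths.append(count)

    print(sum(lengths) / len(lengths))   # prints something close to 1.5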

Updates on Thursday: I never did lose power though many other Georgia Tech faculty did. The New York Times also covered the Tesla update. 

Sunday, September 10, 2017

The Scarecrow's math being wrong was intentional

In 2009 I had a post about movie mistakes (see here). One of them was the Scarecrow in The Wizard of Oz: after he got a diploma (AH- but not a brain) he said

The sum of the square roots of any two sides of an isosceles triangle is equal to the square root of the remaining side. Oh joy! Rapture! I have a brain!

I wrote that this was either (1) a mistake, (2) on purpose, showing the Scarecrow really didn't gain any intelligence (or actually he was always smart, just not in math), or (3) it was Dorothy's dream, so Dorothy was not good at math.

Some of the comments claimed it was (2).  One of the comments said it was on the audio commentary.

We now have further proof AND a longer story: the book Hollywood Science: The Next Generation, From Spaceships to Microchips (see here) discusses the issue (page 90). They point to our blog as having discussed it (the first book not written by Lance or Lipton-Regan to mention our blog?) and then give evidence that YES, it was intentional.

They got a hold of the original script. The Scarecrow originally had a longer, even more incoherent speech that was so over the top it was clearly intentional. Here it is:

The sum of the square roots of any two sides of an isosceles triangle is equal to the square root of the remaining side: H-2-O Plus H-2-S-O-4 equals H-2-S-O-3 using pi-r-squared as a common denominator Oh rapture! What a brain!

While I am sure the point was that the Scarecrow was no smarter, I'm amused at the thought of Dorothy not knowing math or chemistry and jumbling them up in her dream.

Thursday, September 07, 2017

Statistics on my dead cat policy- is there a correlation?

When I teach a small class (at most 40 students) I often have the dead-cat policy for late HW:

HW is due on Tuesday. But there may be things that come up that don't quite merit a doctor's note, for example your cat dying, but are legit reasons for an extension. Rather than have me judge every case, you ALL have an extension until Thursday, no questions asked. Realize of course that the HW is MORALLY due Tuesday. So if on Thursday you ask for an extension, I will deny it on the grounds that I already gave you one. So you are advised not to abuse the policy. For example, if you forget to bring your HW in on Thursday I will not only NOT give the extension, but I will laugh at you.

(I thought I had blogged on this policy before but couldn't find the post.)

Policy PRO: Much less hassling with late HW and doctor's notes and such.

Policy CON: The students tend to THINK of Thursday as the due date.

Policy PRO: Every student did every HW.

Caveat: The students themselves tell me that they DO start the HW on Monday night, but if they can't quite finish it they have a few more days. This is OKAY by me.

I have always thought that there is NO correlation between the students who tend to hand in the HW on Thursday and those that do well in the class.  In the spring I had my TA keep track of this and do statistics on it.

The class was Formal Language Theory (regular languages, P and NP, computability; I also put in some communication complexity. I didn't do context-free grammars.) There were 43 students in the class. We define a student's morality (M) as the number of HWs they hand in on Tuesday. There were 9 HWs.

3 students had M=0

12 students had M=1

9 students had M=2

5 students had M=3

4 students had M=4

4 students had M=5

1 student had M=6

1 student had M=7

2 students had M=8

2 students had M=9

We graphed grade vs morality (see here)

The Pearson correlation coefficient is 0.51, so there is some linear correlation.

The p-value is 0.0003, meaning that if there were NO correlation, data like this would be very unlikely.
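For anyone who wants to redo this kind of calculation, here is a minimal sketch using scipy. The morality scores below match the distribution above, but the grades are invented placeholders for illustration, NOT the actual class data.

    # Sketch only: morality scores follow the distribution in this post,
    # but the grades are made-up placeholders, not the real class data.
    from scipy.stats import pearsonr

    morality = ([0]*3 + [1]*12 + [2]*9 + [3]*5 + [4]*4 +
                [5]*4 + [6]*1 + [7]*1 + [8]*2 + [9]*2)             # 43 students
    grades = [60 + 3*m + (i % 7) for i, m in enumerate(morality)]  # hypothetical grades

    r, p = pearsonr(morality, grades)
    print(f"Pearson r = {r:.2f}, p-value = {p:.4g}")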

My opinion:

1) The 5 students with M at least 7 all did very well in the course. This seems significant.

2) Aside from that there is not much correlation.

3) If I tell next semester's class ``people who handed the HW in on Tuesday did well in the class, so you should do the same'' that would not be quite right- do the good students hand things in on time, or does handing things in on time make you a good student? I suspect the former.

4) Am I surprised that so many students had such low M scores? Not even a little.

Monday, September 04, 2017

Rules and Exceptions

As a mathematician, nothing grates on me more than the expression "the exception that proves the rule". Either we bake the exception into the rule (all primes are odd except two) or the exception in fact disproves the rule.

According to Wikipedia, "the exception that proves the rule" has a legitimate meaning: a sign that says "No parking 3-6 PM" suggests that parking is allowed at other times. Usually, though, I see the expression used when one tries to make a point and wants to dismiss evidence to the contrary. The argument says that if exceptions are rare, that gives even more evidence that the rule is true. As in yesterday's New York Times:
The illegal annexation of Crimea by Russia in 2014 might seem to prove us wrong. But the seizure of Crimea is the exception that proves the rule, precisely because of how rare conquests are today.
Another example might be the cold wave of 2014, which some say supports the hypothesis of global warming because such cold waves are so rare these days.

How about the death of Joshua Brown, when his Tesla on autopilot crashed into a truck? Does this give evidence that self-driving cars are unsafe, or that in fact they are quite safe because such deaths are quite rare? That's the main issue I have with "the exception that proves the rule": it allows two people to take the same fact and draw distinctly opposite conclusions.

Thursday, August 31, 2017

NOT So Powerful

Note: Thanks to Sasho and Badih Ghazi for pointing out that I had misread the Tardos paper. Approximating the Shannon graph capacity is an open problem. Grötschel, Lovász and Schrijver approximate a related function, the Lovász Theta function, which also has the properties we need to get an exponential separation of monotone and non-monotone circuits.

Also since I wrote this post, Norbert Blum has retracted his proof.

Below is the original post.



A monotone circuit has only AND and OR gates, no NOT gates. Monotone circuits can only produce monotone functions like clique or perfect matching, where adding an edge only makes a clique or matching more likely. Razborov in a famous 1985 paper showed that the clique problem does not have polynomial-size monotone circuits.

I chose Razborov's monotone bound for clique as one of my Favorite Ten Complexity Theorems (1985-1994 edition). In that section I wrote
Initially, many thought that perhaps we could extend these [monotone] techniques into the general case. Now it seems that Razborov's theorem says much more about the weakness of monotone models than about the hardness of NP problems. Razborov showed that matching also does not have polynomial-size monotone circuits. However, we know that matching does have a polynomial-time algorithm and thus polynomial-size nonmonotone circuits. Tardos exhibited a monotone problem that has an exponential gap between its monotone and nonmonotone circuit complexity. 
I have to confess I never actually read Éva Tardos' short paper at the time, but since it serves as Exhibit A against Norbert Blum's recent P ≠ NP paper, I thought I would take a look. The paper relies on the notion of the Shannon graph capacity. If you have a k-letter alphabet you can express k^n many words of length n. Suppose some pairs of letters were indistinguishable due to transmission issues. Consider an undirected graph G with edges between pairs of indistinguishable letters. The Shannon graph capacity is the value of c such that you can produce c^n distinguishable words of length n for large n. The Shannon capacity of a 5-cycle turns out to be the square root of 5. Grötschel, Lovász, Schrijver use the ellipsoid method to approximate the Shannon capacity in polynomial time.
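As an aside, the Lovász theta function mentioned in the update at the top can be computed with an off-the-shelf SDP solver. A minimal sketch (assuming cvxpy with an SDP-capable solver such as SCS is installed) that recovers sqrt(5) ≈ 2.236 for the 5-cycle:

    import cvxpy as cp

    # Lovász theta of the 5-cycle via its standard SDP formulation:
    # maximize the total sum of a PSD matrix with unit trace that vanishes on edges.
    n = 5
    edges = [(i, (i + 1) % n) for i in range(n)]

    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for i, j in edges]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    prob.solve()
    print(prob.value)   # ~2.236, i.e. sqrt(5), matching the Shannon capacity of the 5-cycle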

The Shannon capacity is anti-monotone: it can only decrease or stay the same if we add edges to G. If G has an independent set of size k you can get k^n distinguishable words just by using the letters of the independent set. If G is a union of k cliques, then the Shannon capacity is k: choose one representative from each clique, and note that all letters within a clique are indistinguishable from each other.

So the largest independent set is at most the Shannon capacity, which is at most the smallest clique cover.

Let G' be the complement of a graph G, i.e. {u,v} is an edge of G' iff {u,v} is not an edge of G. Tardos' insight is to look at the function f(G) = the Shannon capacity of G'. Now f is monotone in G. f(G) is at least the largest independent set of G' which is the same as the largest clique in G. Likewise f(G) is bounded above by the smallest partition into independent sets which is the same as the chromatic number of G since all the nodes with the same color form an independent set. We can only approximate f(G) but by careful rounding we can get a monotone polynomial-time computable function (and thus polynomial-size AND-OR-NOT circuits) that sits between the clique size and the chromatic number.
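In symbols (just restating the paragraph above, writing ω for clique number, α for independence number, Θ for Shannon capacity, χ̄ for clique cover number and χ for chromatic number):

    \omega(G) \;=\; \alpha(G') \;\le\; \Theta(G') = f(G) \;\le\; \bar{\chi}(G') \;=\; \chi(G)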

Finally Tardos notes that the techniques of Razborov and Alon-Boppana show that any monotone function that sits between clique and chromatic number must have exponential-size monotone (AND-OR) circuits. The NOT gate is truly powerful, bringing the complexity down exponentially.

Sunday, August 27, 2017

either pi is algebraic or some journals let in an incorrect paper!/the 15 most famous transcendental numbers


Someone has published three papers claiming that

π is 17 - 8*sqrt(3), which is really 3.1435935394...

Someone else has published eight papers claiming

π is (14 - sqrt(2))/4, which is really 3.1464466094...

The first result is closer, though I don't think this is a contest that either author can win.
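A quick sanity check of the two claims (just arithmetic, using Python's math module):

    import math

    a = 17 - 8 * math.sqrt(3)      # ~3.1435935394
    b = (14 - math.sqrt(2)) / 4    # ~3.1464466094
    print(abs(a - math.pi))        # ~0.0020, the closer of the two "values"
    print(abs(b - math.pi))        # ~0.0049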

Either π is algebraic, which contradicts a well known theorem, or some journals accepted some papers with false proofs. I also wonder how someone could publish the same result 3 or 8 times.

I could write more on this, but another blogger has done a great job, so I'll point to it: here

DIFFERENT TOPIC (related?): What are the 15 most famous transcendental numbers? While it's a matter of opinion, there is an awesome website that claims to have the answer here. I'll briefly comment on them. Note that some of them are conjectured to be trans but have not been proven to be, so it should be called the 12 most famous trans numbers and 3 famous numbers conjectured to be trans. That is a bit long (and as short as it is only because I use `trans'), so the original author is right to use the title used.

1) pi YEAH (This is probably the only number on the list such that a government tried to legally declare its value, see here for the full story.)

2) e YEAH

3) Euler's constant γ, which is the limit of (sum_{i=1}^n 1/i) - ln(n). I read a book on γ (see here) which had interesting history and math in it, but not that much about γ. I'm not convinced the number is that interesting. Also, it's not known to be trans (the website does point this out).

4) Catalan's constant, 1 - 1/9 + 1/25 - 1/49 + 1/81 - ... Not known to be trans but thought to be. I had never heard of it until reading the website, so either (a) it's not that famous, or (b) I am undereducated.

5) Liouville's number 0.110001... (1 in the 1st, 2nd, 6th, 24th, 120th, etc. - all n!- places, 0's elsewhere). This is a nice one since the proof that it's trans is elementary. It was the first number ever proven trans, proved by the man whose name is on the number.

6) Chaitin's constant, which is the probability that a random TM will halt. See here for more rigor. Easy to show it's not computable, which implies trans. It IS famous.

7) Champernowne's number, which is 0.123456789 10 11 12 13 ... . Cute!

8) Recall that ζ(2) = 1 + 1/4 + 1/9 + 1/16 + ... = π^2/6.

ζ(3) = 1 + 1/8 + 1/27 + 1/64 + ..., known as Apéry's constant, is thought to be trans but this is not known.

It comes up in physics and in the analysis of random minimum spanning trees (see here), which may be why this sum is on the list rather than some other sums.

9) ln(2). Not sure why this is any more famous than ln(3) or other such numbers.

10) 2^sqrt(2) - In the year 1900 Hilbert proposed 23 problems for mathematicians to work on (see here for the problems, see here for a joint book review of two books about the problems, and see here for a 24th problem found in his notes much later). The 7th problem was to show that a^b is trans when a is algebraic (other than 0 or 1) and b is algebraic and irrational. It was proven independently by Gelfond and Schneider (see here). The number 2^sqrt(2) is sometimes called Hilbert's number. Not sure why it's not called the Gelfond-Schneider number. Too many syllables?

11) e^π - Didn't know this one. Now I do!

12) π^e (I had a post about comparing e^π to π^e here.)

13) Prouhet-Thue-Morse constant - see here

14) i^i. Delightful! It's real and trans! Is it easy to show that it's real? I doubt it's easy to show that it's trans. Very few numbers are easy to show are trans, though it's easy to show that most numbers are. (See the short note after this list.)

15) Feigenbaum's constant- see here
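A note on item 14: its realness is a short computation with the principal branch,

    i^i = \left(e^{i\pi/2}\right)^i = e^{i^2\pi/2} = e^{-\pi/2} \approx 0.2079,

and while its transcendence is not elementary, it does follow from the Gelfond-Schneider theorem of item 10, applied with a = b = i.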

Are there any trans numbers of which you are quite fond that aren't on the list?

If you prove any of the above algebraic then you can probably publish it 3 or 8 or i^i times!

I can imagine a crank trying to show π or maybe even e is algebraic. ζ(3) or Feigenbaum's constant, not so much.