Thursday, August 31, 2017

NOT So Powerful

Note: Thanks to Sasho and Badih Ghazi for pointing out that I had misread the Tardos paper. Approximating the Shannon graph capacity is an open problem. Grötschel, Lovász and Schrijver approximate a related function, the Lovász Theta function, which also has the properties we need to get an exponential separation of monotone and non-monotone circuits.

Also since I wrote this post, Norbert Blum has retracted his proof.

Below is the original post.



A monotone circuit has only AND and OR gates, no NOT gates. Monotone circuits can only produce monotone functions like clique or perfect matching, where adding an edge only makes a clique or matching more likely. Razborov in a famous 1985 paper showed that the clique problem does not have polynomial-size monotone circuits.

I chose Razborov's monotone bound for clique as one of my Favorite Ten Complexity Theorems (1985-1994 edition). In that section I wrote
Initially, many thought that perhaps we could extend these [monotone] techniques into the general case. Now it seems that Razborov's theorem says much more about the weakness of monotone models than about the hardness of NP problems. Razborov showed that matching also does not have polynomial-size monotone circuits. However, we know that matching does have a polynomial-time algorithm and thus polynomial-size nonmonotone circuits. Tardos exhibited a monotone problem that has an exponential gap between its monotone and nonmonotone circuit complexity. 
I have to confess I never actually read Éva Tardos' short paper at the time, but since it serves as Exhibit A against Norbert Blum's recent P ≠ NP paper, I thought I would take a look. The paper relies on the notion of the Shannon graph capacity. If you have a k-letter alphabet you can express k^n many words of length n. Suppose some pairs of letters were indistinguishable due to transmission issues. Consider an undirected graph G with edges between pairs of indistinguishable letters. The Shannon graph capacity is the value c such that you can produce c^n distinguishable words of length n for large n. The Shannon capacity of a 5-cycle turns out to be the square root of 5. Grötschel, Lovász, Schrijver use the ellipsoid method to approximate the Shannon capacity in polynomial time.

The Shannon capacity is anti-monotone: it can only decrease or stay the same if we add edges to G. If G has an independent set of size k you can get k^n distinguishable words just by using the letters of the independent set. If G is a union of k cliques, then the Shannon capacity is k: choose one representative from each clique, since all letters within a clique are indistinguishable from each other.

So the largest independent set is at most the Shannon capacity, which is at most the smallest clique cover.
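This sandwich is easy to check by brute force on the 5-cycle itself. Here is a quick Python sketch (my own illustration, not from any of the papers) that computes the largest independent set and the smallest clique cover of C5, between which the Shannon capacity sqrt(5) sits:

```python
from itertools import combinations, product
from math import sqrt

# The 5-cycle C5: vertices 0..4, edges between cyclically consecutive vertices.
n = 5
def edge(u, v):
    return abs(u - v) in (1, n - 1)

# Largest independent set (no two chosen vertices share an edge), brute force.
alpha = max(len(s) for k in range(n + 1)
            for s in combinations(range(n), k)
            if all(not edge(u, v) for u, v in combinations(s, 2)))

# Smallest clique cover: least k such that some assignment of the vertices
# to k classes makes every class a clique.
def is_clique_cover(classes):
    return all(edge(u, v) for u, v in combinations(range(n), 2)
               if classes[u] == classes[v])

cover = next(k for k in range(1, n + 1)
             if any(is_clique_cover(c) for c in product(range(k), repeat=n)))

print(alpha, round(sqrt(5), 3), cover)  # 2 <= sqrt(5) ~ 2.236 <= 3
```

For C5 the sandwich gives 2 ≤ sqrt(5) ≤ 3; it was Lovász's theta function that pinned the capacity down to exactly sqrt(5).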

Let G' be the complement of a graph G, i.e. {u,v} is an edge of G' iff {u,v} is not an edge of G. Tardos' insight is to look at the function f(G) = the Shannon capacity of G'. Now f is monotone in G. f(G) is at least the largest independent set of G' which is the same as the largest clique in G. Likewise f(G) is bounded above by the smallest partition into independent sets which is the same as the chromatic number of G since all the nodes with the same color form an independent set. We can only approximate f(G) but by careful rounding we can get a monotone polynomial-time computable function (and thus polynomial-size AND-OR-NOT circuits) that sits between the clique size and the chromatic number.

Finally Tardos notes that the techniques of Razborov and Alon-Boppana show that any monotone function that sits between clique and chromatic number must have exponential-size monotone (AND-OR) circuits. The NOT gate is truly powerful, bringing the complexity down exponentially.

Sunday, August 27, 2017

either pi is algebraic or some journals let in an incorrect paper!/the 15 most famous transcendental numbers


Someone has published three papers claiming that

π is 17 - 8*sqrt(3), which is really 3.1435935394...

Someone else has published eight papers claiming

π is (14 - sqrt(2))/4, which is really 3.1464466094...

The first result is closer, though I don't think this is a contest that either author can win.
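A three-line sanity check (mine, not from either author) confirms both claims miss π and that the first misses by less:

```python
from math import pi, sqrt

# The two published "values" of pi quoted above.
claim1 = 17 - 8 * sqrt(3)      # 3.1435935394...
claim2 = (14 - sqrt(2)) / 4    # 3.1464466094...

# Neither equals pi; the first is off by about 0.0020, the second by about 0.0049.
print(abs(claim1 - pi), abs(claim2 - pi))
```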

Either π is algebraic, which contradicts a well known theorem, or some journals accepted some papers with false proofs. I also wonder how someone could publish the same result 3 or 8 times.

I could write more on this, but another blogger has done a great job, so I'll point to it: here

DIFFERENT TOPIC (related?) What are the 15 most famous transcendental numbers? While it's a matter of opinion, there is an awesome website that claims to have the answer here. I'll briefly comment on them. Note that some of them are conjectured to be trans but have not been proven to be, so the list should really be called the 12 most famous trans numbers and 3 famous numbers conjectured to be trans. That is a bit long (and is only as short as it is because I use `trans'), so the original author is right to use the title used.

1) pi YEAH (This is probably the only number on the list such that a government tried to legally declare its value, see here for the full story.)

2) e YEAH

3) Euler's constant γ, which is the limit of (sum_{i=1}^n 1/i) - ln(n). I read a book on γ (see here) which had interesting history and math in it, but not that much about γ. I'm not convinced the number is that interesting. Also, it is not known to be trans (the website does point this out).
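The convergence in that limit is painfully slow (the error is about 1/(2n)), as a quick sketch of mine shows:

```python
from math import log

# Approximate Euler's constant: gamma = lim (1 + 1/2 + ... + 1/n) - ln(n).
n = 10**6
gamma_approx = sum(1 / k for k in range(1, n + 1)) - log(n)

# True value is 0.5772156649...; even at n = 10^6 we only get ~7 digits.
print(gamma_approx)
```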

4) Catalan's constant 1 - 1/9 + 1/25 - 1/49 + 1/81 - ... Not known to be trans but thought to be. I had never heard of it until reading the website, so either (a) it's not that famous, or (b) I am undereducated.

5) Liouville's number 0.110001... (1 at the 1st, 2nd, 6th, 24th, 120th, etc. - all the n! places - and 0's elsewhere)
This is a nice one since the proof that it's trans is elementary. It was the first number ever proven trans, proved by the man whose name is on the number.
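Here is a little sketch of mine that builds the first 30 decimal places from that description:

```python
from math import factorial

# Liouville's number: a 1 in decimal place n! for n = 1, 2, 3, ..., 0 elsewhere.
places = 30
ones = set()
n = 1
while factorial(n) <= places:
    ones.add(factorial(n))
    n += 1

digits = "".join("1" if k in ones else "0" for k in range(1, places + 1))
print("0." + digits)  # 1's at places 1, 2, 6 and 24 (120 is past place 30)
```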

6) Chaitin's constant, which is the probability that a random TM will halt. See here for more rigor. It is easy to show it's not computable, which implies trans. It IS famous.

7) Champernowne's number, which is 0.123456789 10 11 12 13 ... . Cute!
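The construction is exactly what it sounds like: concatenate the positive integers (a one-liner of mine):

```python
# Champernowne's number: write "0." and then the positive integers in order.
digits = "".join(str(k) for k in range(1, 16))
print("0." + digits)  # 0.123456789101112131415
```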

8) recall that ζ(2) = 1 + 1/4 + 1/9 + 1/16 + ... = π^2/6

ζ(3) = 1 + 1/8 + 1/27 + 1/64 + ..., known as Apéry's constant, thought to be trans but not known to be.

It comes up in Physics and in the analysis of random minimal spanning trees, see here which may be why this sum is here rather than some other sums.
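Partial sums make both constants easy to eyeball (a sketch of mine; ζ(2) should approach π^2/6 ≈ 1.6449 and ζ(3) Apéry's constant 1.2020569...):

```python
from math import pi

# Partial sums of zeta(2) and zeta(3) out to N terms.
N = 100_000
zeta2 = sum(1 / k**2 for k in range(1, N + 1))  # -> pi^2/6 = 1.6449...
zeta3 = sum(1 / k**3 for k in range(1, N + 1))  # -> Apery's constant 1.2020569...
print(zeta2, pi**2 / 6, zeta3)
```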

9) ln(2). Not sure why this is any more famous than ln(3) or other such numbers.

10) 2^sqrt(2) - In the year 1900 Hilbert proposed 23 problems for mathematicians to work on (see here for the problems, see here for a joint book review of two books about the problems, and see here for a 24th problem found in his notes much later). The 7th problem was to show that a^b is trans when a is algebraic (not 0 or 1) and b is algebraic and irrational. It was proven by Gelfond and refined by Schneider (see here). The number 2^sqrt(2) is sometimes called Hilbert's number. Not sure why it's not called the Gelfond-Schneider number. Too many syllables?

11) e^π  Didn't know this one. Now I do!

12) π^e (I had a post about comparing e^π to π^e here.)

13) Prouhet-Thue-Morse constant - see here

14) i^i. Delightful! It's real and trans! Is it easy to show that it's real? I doubt it's easy to show that it's trans. Very few numbers are easy to show trans, though it's easy to show that most numbers are.
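Showing it's real is in fact easy: with the principal branch, ln i = iπ/2, so i^i = e^(i ln i) = e^(-π/2) ≈ 0.2079. Its transcendence follows from the Gelfond-Schneider theorem in item 10. A two-line check of mine:

```python
from math import exp, pi

# i^i with the principal branch: i = e^(i*pi/2), so i^i = e^(-pi/2), a real number.
val = 1j ** 1j
print(val.real, exp(-pi / 2))  # both ~ 0.20788
```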

15) Feigenbaum's constant- see here

Are there any trans numbers you are quite fond of that aren't on the list?

If you prove any of the above algebraic then you can probably publish it 3 or 8 or ii times!

I can imagine a crank trying to show π or maybe even e is algebraic. ζ(3) or Feigenbaum's constant, not so much.



Thursday, August 24, 2017

Kurtz-Fest

Stuart Kurtz turned 60 last October and his former students John Rogers and Stephen Fenner organized a celebration in his honor earlier this week at Fenner's institution, the University of South Carolina in Columbia.

Stuart has been part of the CS department at the University of Chicago since before they had a CS department and I knew Stuart well as a co-author, mentor, boss and friend during my 14+ years at Chicago. I would have attended this weekend no matter the location but a total eclipse a short drive from Atlanta (which merely had 97% coverage) certainly was a nice bonus.

Stuart Kurtz brought a logic background to computational complexity. He's played important roles in randomness, the structural properties of reductions, especially the Berman-Hartmanis isomorphism conjecture, relativization, counting complexity and logics of programs. I gave a talk about Stuart's work focusing on his ability to come up with the right definitions that help drive results. Stuart defined classes like Gap-P and SPP that have really changed the way people think about counting complexity. He changed the way I did oracle proofs: create the oracle first and then prove what happens as a consequence, instead of the other way around. It was this approach, focusing on an oracle called sp-generic, that allowed us to give the first relativized world where the Berman-Hartmanis conjecture held.

Tuesday, August 22, 2017

The Crystal Blogaversity

A joint post from Lance and Bill

This blog started fifteen years ago today as  "My Computational Complexity Web Log". Bill came on permanently in 2007 after Lance retired from the blog, a retirement that didn't even last a year. We've had over 2500 posts and 6 million page views. We've highlighted great results, honored 100th birth anniversaries, mourned the passing of far too many colleagues and talked about the joys and challenges of being an academic and a theoretical computer scientist.

During the time of this blog, Lance held jobs at four different institutions, several positions in the theoretical computer science community and watched his daughters grow up. Besides his wife, perhaps the only constant in his life is this blog, and no matter how busy things get he still aims to post once a week. Writing keeps him sane.

Bill, who is somewhat of a Luddite, has seen technology change so much around him that he needs something to stay the same. This blog has kept him sane. Or at least more sane.

Computing has seen dramatic changes in the past fifteen years driven by cloud computing, big data and machine learning. Computing now drives society and we've only seen the tip of the iceberg so far. Precious few of these developments are grounded in theory and our community will have a large role to play in understanding what is and isn't possible in this brave new computational world.

Education has changed as well. The number of people majoring in Computer Science has skyrocketed, crashed, and is now skyrocketing again. We teach large lectures with PowerPoint and other technologies for both good and ill. Some people get their degrees online for both good and ill. We comment on all of these developments for both good and ill.

We're not done yet with the blog. We'll keep writing and hope you keep reading. To the next fifteen.

Thursday, August 17, 2017

The World is Not for Me

I wanted to address diversity after the Google memo controversy but that shouldn't come from an old white man. I asked my daughter Molly, a college student trying to set her course, to give her thoughts.

The world is not for me. It never has been, and it never will be. This truth is bleak, but unavoidable. The world does not belong to women.

The possibilities for my life are not endless. The achievements in my sight are not mine to take. If I want them, I have to fight harder, prove more about myself, please people more, defy more first impressions. I’ll have to be smarter, stronger, more patient and more assertive. Every expectation of me has a mirror opposite, fracturing my success into a thing of paradoxes. I know this, and I’ve always known it, to some extent. As I get older, the more I learn and the more I educate myself, the more words I have to describe it.

So you’ll forgive me for not being surprised that sexism exists, especially in such male-dominated fields as technology and computing. You’ll forgive me for calling it out by name, and trying to point it out to those I care about. You’ll forgive me for being scared of tech jobs, so built by and for white men and controlled by them that the likelihood of a woman making a difference is almost none. And you’ll forgive me for trying to speak my mind and demand what I deserve, instead of living up to the natural state of my more “agreeable” gender.

I know this disparity is temporary. I know these fields could not have come nearly as far as they have come without the contributions of many extraordinary women who worked hard to push us into the future. I know that other fields that once believed women were simply incapable of participating are now thriving in the leadership of the very women who defied those odds. And I know, with all of my being, that the world moves forward, whether or not individuals choose to accept it.

I’m so fortunate to live the life I do, and to have the opportunities I have. This is not lost on me. But neither is the understanding that this world was not built for me, and still won’t have been built for me even when the tech world is ruled by the intelligent women who should already be in charge of it. The existence of people who believe genders to be inherently different will always exist, always perpetuate the system that attaches lead weights to my limbs, padlocks to my mouth.

But that doesn’t mean I’ll give up. It’s what women do, because it’s what we have to do, every day of our lives: we defy the odds. We overcome. The future includes us in it, as it always has, and it’s because of the women who didn’t give up. And I’ll be proud to say I was one of them.

Sunday, August 13, 2017

What is unusual about this MIT grad student in Applied Math?

(Thanks to Rachel Folowoshele for bringing this to my attention)

John Urschel is a grad student in applied math at MIT. His webpage is here.

Some students go straight from ugrad to grad (I did that.)

Others take a job of some sort and then after a few years go to grad school.

That's what John did; however, his prior job was unusual among applied math grad students.







He was in the NFL as a professional football player! See here for more about the career change, though I'll say that the brain problems NFL players have (being hit on the head is not good for your brain) were a major factor in doing this NOW rather than LATER.

How unusual is this? Looking around the web I found lists of smart football players, and lists of football players with advanced degrees (these lists were NOT identical but there was some overlap) but the only other NFL player with a PhD in Math/CS/Applied math was

Frank Ryan- see his Wikipedia page here. He got his PhD WHILE playing football. He was a PhD student at Rice.

I suspect that a professional athlete getting a PhD in Math or CS or Applied Math or Physics or... probably most things, is rare. Why is this? NOT because these people are dumber or anything of the sort, but because it's HARD to do two things well, especially now that both math and football have gotten more complex. It's the same reason we don't have many Presidents with PhDs (Wilson was the only one) or Presidents who know math (see my post on presidents who knew math: here) or Popes who are scientists (there was one around 1000 AD, see here).

If you know of any professional athlete who has a PhD in some science or math, please leave a comment on such.

(ADDED LATER- a commenter pointed out Angela Merkel, who has a PhD in Physical Chemistry, is chancellor of Germany, and even has a musical about her, see here.)

(ADDED LATER- some of the comments mentioned Olympic athletes, who are not professionals, as another comment pointed out. So I clarify: Olympic is fine too, I really meant serious athlete.)

Thursday, August 10, 2017

Wearable Tech and Attention

Remember the Bluetooth craze, when it seemed half of all people walked around with a headset in their ear? Now you rarely see one.

Remember Google Glass? That didn't last long.

I remember having a conversation with someone and all of a sudden they would say something nonsensical and you'd realize they were on the phone talking to someone else. Just by wearing a Bluetooth headset you felt that they cared more about a potential caller than about the conversation they were currently having with you.

Google Glass gave an even worse impression. Were they listening to you or checking their Twitter feed? [Aside: I now use "they" as a singular genderless pronoun without even thinking about it. I guess an old dog can learn new tricks.]

When you get bored and pull out your phone to check email or put on headphones to listen to music or a podcast, you give a signal that you don't want to be disturbed, even if that isn't your intent. Wearing a Bluetooth headset or Google Glass gave that impression all the time, which is why the technology didn't stick.

What about smart watches? You can certainly tell if someone has an Apple watch. But if they don't look at it you don't feel ignored. Some people think they can check their watch without the other person noticing. They can't. I've been guilty of this myself.

What happens when our brains are directly connected to the Internet? You'll never know if anyone is actually listening to you in person. Of course, at that point will there even be a good reason to get out of bed in the morning?

Sunday, August 06, 2017

Should we care if a job candidate does not know the social and ethical implications of their work (Second Blog Post inspired by Rogaway's Moral Character Paper)

Phillip Rogaway's article on the

The Moral character of Cryptographic Work (see here)

brings up so many issues that it could be the topic of at least 5 blog posts. I've already done one here, and today I'll do another. As I said in the first post, I urge you to read it even if you disagree with it, in fact, especially if you disagree with it. (Possible paradox: you have to read it to determine if you disagree with it.)

Today's issue:

Should a department take into account whether someone understands the social and ethical issues with their work?

1) I'll start with something less controversial. I've sometimes asked a job candidate `why do you work on X?' Bad answers:

        Because my adviser told me to.

        Because I could make progress on it.

        Because it was fun to work on.

People should always know WHY they are working on what they are working on. They should know the motivation of the original researchers, even if the current motivation is different. If it's a new problem, then why is it worth studying?

2) In private email to Dr. Rogaway he states that he just wants this to be ONE of the many issues having to do with job hiring (alas, it usually is not even ONE). As such, the thoughts below may not be quite right since they assume a bigger role. But if you want to make something a criterion, even a small one, we should think of the implications.

3) In private email to Dr. Rogaway I speculated that we need to care more about this issue when interviewing someone in security than in (say) Ramsey theory. He reminded me of work done in pure graph theory funded by the DOD that is about how best to disable a network (perhaps a social network talking too much about why the Iraq war is a terrible idea). Point taken- this is not just an issue in Security.

4) What if someone is working on security, funded by the DOD, and is fully aware that the government wants to use her work to illegally wiretap people, and is quite okay with that? To hold that against her seems like holding someone's politics against them, which I assume all readers of this blog would find very unfair. OR is it okay to hire her since she HAS thought through the issues? The fact that you disagree with her conclusion should be irrelevant.

5) What if she says that the DOD, once they have the tech, will only wiretap bad people? (see here)

6) Let's say that someone is working on cute crypto with pictures of Alice and Bob (perhaps Alice in Wonderland and Bob the Builder). It's good technical work and is well funded. It has NO social or ethical implications because it has NO practical value, and she knows it. Should this be held against her? More so than other branches of theory?

7) People can be aware of the social and ethical issues and not care.

8) The real dilemma: A really great job candidate in security who is brilliant. The work is top notch but has serious negative implications. The job candidate is clueless about that. But they can bring in grant money! Prestige! Grad students! I don't have an answer here, but it's hard to know how much to weigh social and ethical awareness against getting a bump in the US News and World Report rankings!

        What does your dept do? What are your thoughts on this issue?



Thursday, August 03, 2017

What Makes a Great Definition

Too often we see bad definitions, a convoluted mess carefully crafted to make a theorem true. A student asked me what makes for a great definition in theoretical computer science. The right definition can start a research area; a bad definition can take research down the wrong path.

Some goals of a definition:
  • A great definition should capture some phenomenon, like computation (Turing machines), efficient computation (P), efficient quantum computation (BQP). Cryptography has produced some of the best (and worst) definitions to capture security concerns.
  • A great definition should be simple. Defining computability by a Turing machine--good. Defining computability by the 1334-page ISO/IEC 14882:2011 C++ standard--not so good.
  • A great definition should be robust. Small changes in the definition should have little, or better, no change in what fulfills the definition. That is what makes the P v NP problem so nice since both P and NP are robust to various different models of computing. Talking about the problems solvable by a 27-state Turing machine--not robust at all.
  • A great definition should be logically consistent. Defining a set as any definable collection doesn't work.
  • A great definition should be easy to apply. It should be easy to check that something fulfills the definition, ideally in a simply constructive way.
A great definition drives theorems, not the other way around.

Sometimes you discover that a definition does not properly capture a phenomenon--then you should either change or discard your definition, or change your understanding of the phenomenon.

Let's go through an interesting example. In 1984, Goldwasser, Micali and Rackoff defined k-bits-of-knowledge interactive proof systems. Did they have good definitions?
  • The definition of interactive proof systems hits all the right points above and created a new area of complexity that we still study today. 
  • Their notion of zero-(bits of) knowledge interactive proofs hits nearly the right points. Running one zero-knowledge protocol followed by another using the GMR definition might not keep zero-knowledge but there is an easy fix for that. Zero-knowledge proof systems would end up transforming cryptography. 
  • Their notion of k-bit knowledge didn't work at all. It's not really robust, and a protocol that outputs the factors of a number half the time leaks only 1 bit of knowledge by the GMR definition. They smartly dropped the k-bit definition in the journal version.
Two great definitions trump one bad one, and GMR rightly received, along with Babai-Moran who gave an alternative equivalent definition of interactive proofs, the first Gödel Prize.