Sunday, October 28, 2018

If P=NP then we HAVE an alg for SAT.

I am writing up the results of my survey of people's opinions on P vs NP (it will appear in SIGACT News, in Lane's Complexity Column, in 2019). Some people wrote:

                          P=NP but the proof will be nonconstructive and have a large constant.

A large constant could happen.

If by nonconstructive they mean not practical, then yes, that could happen.

The following does not quite show it can't happen, but it does give one pause: an old result of Levin's shows that there is a program you could write NOW such that, if P=NP, this program decides SAT except on a finite number of formulas (all of which are NOT in SAT), and it can be proven to work in poly time (I will give three pointers to proofs below). The finite number of exceptions is why the people above may still be right. But only a finite number -- that seems like a weak kind of nonconstructive.
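
To make the flavor of this concrete, here is a minimal Python sketch of the witness-search half of the construction. It is not Levin's actual construction: all_programs() below is a placeholder with two toy programs, whereas the real construction enumerates ALL programs in length-lexicographic order, adding one new program each round. The point is that any claimed satisfying assignment is verified before it is output, so wrong YES answers never happen, and if P=NP some fixed program in the (real) enumeration finds witnesses in poly time, making the whole dovetailed search poly time on satisfiable formulas. Getting a PROVABLY poly-time decision procedure with only finitely many errors on unsatisfiable formulas is the subtle part; see the pointers below.

# A sketch (not Levin's actual construction) of universal search for SAT witnesses.
# all_programs() is a placeholder; the real construction enumerates ALL programs.
from itertools import count

def check_assignment(clauses, assignment):
    # clauses: list of clauses, each a list of nonzero ints (DIMACS-style literals)
    # assignment: dict mapping variable -> bool
    return all(any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause)
               for clause in clauses)

def all_programs():
    # Placeholder enumeration.  Each "program" maps (clauses, budget) to a
    # candidate assignment or None.
    def guess_all_true(clauses, budget):
        return {abs(lit): True for clause in clauses for lit in clause}

    def brute_force(clauses, budget):
        variables = sorted({abs(lit) for clause in clauses for lit in clause})
        for mask in range(min(budget, 2 ** len(variables))):
            candidate = {v: bool((mask >> i) & 1) for i, v in enumerate(variables)}
            if check_assignment(clauses, candidate):
                return candidate
        return None

    yield guess_all_true
    yield brute_force

def universal_sat_search(clauses):
    # Dovetail the programs with growing budgets; accept only verified witnesses.
    n = len({abs(lit) for clause in clauses for lit in clause})
    programs = list(all_programs())
    for budget in count(1):
        for prog in programs[:budget]:
            candidate = prog(clauses, 2 ** budget)   # "run prog for ~2^budget steps"
            if candidate is not None and check_assignment(clauses, candidate):
                return candidate                     # never wrong: the witness was checked
        if budget > n:
            # brute_force has by now tried every assignment, so the formula is
            # unsatisfiable; the finitely-many-exceptions subtlety lives here.
            return None

print(universal_sat_search([[1, -2], [2, 3], [-1, -3]]))   # prints a satisfying assignment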

Since I am teaching crypto I wondered about Factoring. An old result of Gasarch (I proved it this morning -- I am sure it is already known) shows that there is a program you could write NOW such that, if Factoring is in P, this program factors numbers correctly ALWAYS (none of this finite-exception crap) and can be proven to work in poly time (a sketch appears below). Even so, the algorithm is insane. If someone thought that Factoring in P might be nonconstructive, my construction disproves it in such an absurd way that the notion that Factoring could be in P nonconstructively should be taken seriously but not literally. There should be a way to say formally:

                          I believe that FACTORING is in P, but any poly-time algorithm is insane (not even looking at the constants) and hence could never be implemented.


Not sure how to define insane.
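
For what it is worth, here is the analogous sketch for the factoring construction, with the same caveat: candidate_programs() is a placeholder for an enumeration of ALL programs. The reason there are no exceptions at all is that a claimed factor is verified with one division, so a wrong answer is never output, and keeping trial division in the enumeration guarantees the search always terminates with a correct answer. If Factoring is in P, some program in the real enumeration succeeds quickly and the dovetailed search provably runs in poly time.

# A sketch of the always-correct universal factoring algorithm described above.
# candidate_programs() is again a placeholder for an enumeration of ALL programs.
from itertools import count
from math import isqrt

def candidate_programs():
    def fermat_step(n, budget):
        # toy heuristic: a few rounds of Fermat's method
        a = isqrt(n) + 1
        for _ in range(budget):
            b = isqrt(a * a - n)
            if b * b == a * a - n:
                return a - b
            a += 1
        return None

    def trial_division(n, budget):
        for d in range(2, min(budget, isqrt(n) + 1) + 1):
            if n % d == 0:
                return d
        return None

    yield fermat_step
    yield trial_division

def universal_factor(n):
    # Returns a nontrivial factor of n, or n itself if n is prime (n >= 2).
    if n < 4:
        return n
    programs = list(candidate_programs())
    for budget in count(1):
        for prog in programs[:budget]:
            d = prog(n, 2 ** budget)
            if d is not None and 1 < d < n and n % d == 0:   # verification: one division
                return d
        if 2 ** budget > isqrt(n) + 1:
            return n   # trial division was exhaustive, so n is prime

print(universal_factor(91))   # 7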

Three pointers:

Stack Exchange TCS:  here

Wikipedia: here

My slides (also include factoring result): here

Question: Can the SAT result be improved to be an algorithm that is ALWAYS right? Is there a way to show that it can't be (unless, say, P=NP)?

Question: What can be said about Graph Isomorphism in this context? The proof for SAT is easily adapted to this case (all we used about SAT was that it was self-reducible). But can we get our GI algorithm to never make a mistake?

Thursday, October 25, 2018

Good Results Made Meaningless

Sometimes you see a beautiful theorem A that you love to talk about. Then another beautiful theorem B comes along, making the first one meaningless since B trivially implies A. B is not a mere extension of A; it has a completely different proof of something much stronger. People will forget all about A--why bother when you have B? Too bad, because A was such a nice breakthrough in its time.

Let me give two examples.

In STOC 1995 Nisan and Ta-Shma showed that Symmetric logspace is closed under complement. Their proof worked quite differently from the 1988 Immerman-Szelepcsényi construction showing nondeterministic logspace is closed under complement. Nisan and Ta-Shma created monotone circuits out of undirected graphs and used these monotone circuits to create sorting networks to count the number of connected components of the graph.

Ten years later Omer Reingold showed that symmetric logspace is the same as deterministic logspace, making the Nisan-Ta-Shma result a trivial corollary. Reingold's proof used walks on expander graphs and the Nisan-Ta-Shma construction was lost to history.

In the late 80's we had several randomized algorithms for testing primality but they didn't usually give a proof that the number was prime. A nice result of Goldwasser and Kilian gave a way to randomly generate certified primes, primes with proofs of primeness. Adleman and Huang later showed that one can randomly find a proof of primeness for any prime.

In 2002, Agrawal, Kayal and Saxena showed Primes in P, i.e., primes no longer needed a proof of primeness. As Joe Kilian said to me at the time, "there goes my best chance at a Gödel Prize".

Monday, October 22, 2018

Please Don't call them Guidance Counselors

As I mentor many HS students, I was recently in email contact with a high school's contact for student projects, and I noticed that the sign-off was

Allie Downey
Guidance School Counselor (with "Guidance" struck through)

This piqued my interest, so I emailed her asking about the cross-out.

She was delighted to answer! Here is her email:

Thank you, but “Guidance Counselor” is an outdated term.  It is from a time before there was the American School Counselor Association, before the profession required at least a  Masters degree, and before there was a nationally recognized comprehensive school counseling program. Guidance is a service; school counseling is a program.  This is a great website that explains it even better here

Thank you for taking the time to ask.  Anyone in the profession today prefers the term School Counselor and I always appreciate when people inquire.

I asked her further about the difference and here is what she said:

Everyone  in a young person’s life offers guidance in some fashion.  School counselors still provide guidance classroom lessons (such as bully prevention and character development), but we do so much more.  We help students develop their academic abilities and study skills.  We assist them and their families in college and career planning.  We teach coping skills so students can guide themselves.  We ask questions to help these young adults discover the answers on their own.  We help students learn how to advocate for themselves.  We console.  We mediate.  We learn and adapt with changing climates.  We work with families, faculty, and community members to make sure school is a safe place for students to learn and grow. And this is all before the lunch bell. It is an amazing profession and I am proud to call myself a school counselor.

Thursday, October 18, 2018

A New Cold War?

Imagine the following not-so-unrealistic scenario: The US-China trade war deepens, leading to a cold war. The US blocks all Chinese citizens from graduate school in the US. Visas between the two countries become hard to get. The Chinese close off their Internet to the West.
If things continue along this path, the next decade may see the internet relegated to little more than just another front on the new cold war.
I wouldn't have thought it in our hyperconnected age, but we are within spitting distance of going back to the '60s. What would this all mean for science and computing?

Let's go back to the original cold war between the Soviet Union and the US, roughly from 1947-1991. We didn't have a significant internet back then (though we can thank the cold war for the internet). One had to assume that the government read mail to/from the USSR. Travel between the USSR and the Eastern bloc and the West was difficult. Academic research did cross over, but only in dribs and drabs, and we saw two almost distinct academic cultures emerge, often with duplication of effort (Cook/Levin, Borodin/Trakhtenbrot, Immerman/Szelepcsényi).

Beyond the limited communication came the lack of collaboration. Science works best with open discussion, sharing of ideas and collaboration. It took a Russian and an American working together to give us Google.

No cold war in this age can completely cut off ideas flowing between countries but it can truly hamper knowledge growth. We can't count on US superiority with China already ahead of us in areas like high-speed trains, renewable energy and mobile payments.

The cold war did have an upside for science: the competition between the East and the West pushed research growth and funding on both sides. We already see arguments for funding quantum computing and machine learning based on the necessity of staying ahead of the Chinese. But science funding should be driven not by fear but by curiosity and its indisputable long-term effects on our economy.

We must keep the flow of information and people open, from students through senior researchers, if we want science and technology to flourish to their fullest. Don't let a new cold war bring us back to the days of separate communities, denying us breakthroughs we will never even know we missed.

Sunday, October 14, 2018

Practical consequences of RH?

When it seemed like the Riemann Hypothesis (RH) might have been solved (see the Lipton-Regan blog entry on RH here and what it points to for more info), I had the following email exchange with Ajeet Gary (not Gary Ajeet, though I will keep his name in mind for when I want to string together names like George Washington, Washington Irving, Irving Berlin, with the goal of getting back to the beginning), who is an awesome ugrad at UMCP majoring in Math and CS.

Ajeet: So Bill, now that RH has been solved should I take my credit cards off of Amazon?

Bill: I doubt RH has been solved. And I think you are thinking that from RH you can prove that factoring is in P. That is not known and likely not true.

Ajeet: What are my thoughts and why are they wrong?

Bill: What am I, a mind-reader?

Ajeet: Aren't you?

Bill: Oh, yes, you are right, I am. Here is what you are confusing this with, and why, even if you were right, you would be wrong.

Ajeet: It just isn't my day.

Bill: Any day you are enlightened is your day. Okay, here are the thoughts you have:

a) From the Extended RH (a generalization of RH) you can prove that PRIMES is in P. (This algorithm is slow and not used. PRIMES has a fast randomized algorithm that people do use. PRIMES was eventually proven to be in P anyway, though again that algorithm is slow.) Note: even though we do not know if ERH is true, one could still RUN the algorithm that it depends on. ERH is only used to prove that the algorithm runs in poly time. (A sketch of this ERH-based algorithm appears after these thoughts.)

b) There was an episode of Numb3rs where they claimed (1) RH implies Factoring in P -- not likely but not absurd, and (2) from the proof of RH you could get a FAST algorithm for factoring in a few hours (absurd). I say absurd for two reasons: (i) going from basic research to application takes a long time, and (ii) see the next thought.

c) If (RH --> factoring easy) then almost surely the proof would present an algorithm (that can be run even if RH has not been proven) and then a proof that RH --> the algorithm's run time is poly. But I wonder -- is it possible that:

RH --> factoring easy, and

the proof does not give you the algorithm, and

if you had a proof of RH then you COULD get the algorithm (though not in a few hours)?

I doubt this is the case.
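
An aside on thought (a): below is a minimal Python sketch of the ERH-based test alluded to there, essentially Miller's deterministic test (the sketch is mine, not taken from any of the sources above). You can run it today; ERH is only invoked to prove that checking bases up to roughly 2(ln n)^2 suffices (Bach's bound), which is what makes the algorithm provably correct and poly time.

# A sketch of Miller's ERH-based deterministic primality test.
# The algorithm runs unconditionally; ERH is only used to prove that bases up to
# about 2*(ln n)^2 suffice, which gives the provable poly-time bound.
from math import ceil, log

def strong_test(n, a):
    # one round of the strong (Miller) test for base a; n odd, 2 <= a < n
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def miller_erh(n):
    # deterministic; correctness of the "prime" answer assumes ERH
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    bound = min(n - 2, ceil(2 * log(n) ** 2))
    return all(strong_test(n, a) for a in range(2, bound + 1))

print([p for p in range(2, 60) if miller_erh(p)])   # the primes below 60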

Ajeet: So are there any practical consequences of RH?

Bill: Would you call better bounds on the error term of the prime number theorem practical?

Ajeet: YES!

Bill: GREAT! For more on RH see here
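
A footnote for the record, since Bill's answer is terse: the classical equivalence he is alluding to is von Koch's, that RH holds if and only if the prime number theorem has a square-root error term,

π(x) = Li(x) + O(√x · log x),

which is much stronger than the error bounds known unconditionally.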

Thursday, October 11, 2018

2018 Fall Jobs Post

As we do every year at this time, we help you find that perfect academic job. So who's hiring in CS this year? Perhaps we should instead follow the advice of John Regehr.
For computer science faculty positions, it's best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department.

And (selfish plug) Georgia Tech is looking to hire in theory this year.

Some generally good advice: Make sure you have strong letter writers who know your research well, best if one or two of them come from outside your university. Put all your papers and materials on your website and make sure your Google Scholar page is accurate. Put effort into your job talk and remember you need to sell your research to non-theorists. It helps if you can connect to other areas, especially machine learning, data science or cybersecurity. Quantum seems hot this year.

Above all have fun! In this computational and data-driven world we live in, there is a great job out there for you.

Monday, October 08, 2018

A New ACO Center (guest post by Vijay Vazirani)

Guest Post by Vijay Vazirani

                                                          A New ACO Center!

Last week, I helped launch an ACO Center (Algorithms, Combinatorics and Optimization) at my wonderful new home, UC Irvine. There are only two other such centers, at CMU and Georgia Tech (29 and 27 years old, respectively). My personal belief is that there will be more in the future. Let me justify.

When I joined Georgia Tech in 1995, my research was centered around approximation algorithms, a topic that resonated with its ACO Center. I was able to build on this interest in numerous ways: by offering new versions of courses on this topic as new results emerged, and by attracting to GT, for the first time, a large number of top theory PhD students who went on to produce stellar results and start impressive careers of their own. Course notes accumulated over the years eventually led to my book on the topic in 2001. Right after that, I switched to algorithmic game theory, and again ACO became the center of that activity, this time resulting in a co-edited book which had a huge impact on the growth of this area. In short, ACO gave me a lot! In turn, I believed in it and I worked for it wholeheartedly.

I still believe in ACO and I feel it is very much relevant in today’s research world. Similar to the other two ACOs, our Center at UCI also exploits the natural synergies among TCS researchers from the CS Department, probability and combinatorics researchers from the Math Department, and optimization researchers from the Business School. Additionally, our Center has grown well beyond these boundaries to include a highly diverse collection of faculty (e.g., from the prestigious Institute for Mathematical Behavioral Sciences) whose common agenda is to utilize the “algorithmic way of thinking”, which is set to revolutionize the sciences and engineering over the course of this century, just as mathematics did in the last. The Center website has further details about its vision and activities.

Many universities are in a massive hiring mode today (especially in CS), e.g., UCI plans to hire 250 new faculty over the next five years. Centers such as ours present the opportunity of hiring in a more meaningful manner around big themes. They can also be instrumental in attracting not only top students but also top faculty.

A center of excellence such as GT’s ACO does not simply spring up by itself; it requires massive planning, hard work, good taste and able leadership. For the last, I will forever be indebted to Robin Thomas for his highly academic, big-vision, classy leadership style, which was the main reason ACO remained such a high-quality program for so long. Moving forward, will we stay with three ACO Centers or will there be more? I believe the latter, but only time will tell.

Saturday, October 06, 2018

John Sidles, Mike Roman, Matt Howell, please email me/hard to get emails of people

John Sidles, Mike Roman, Matt Howell: please email me at gasarch@cs.umd.edu (my usual email).

I need to ask you about some comments you left on the blog a while back (or emailed me -- I forget which, but I can't find your emails if you did email me). I need you to contact me SOON!

When you do, I will tell you what's up and why I decline to say here what this is about.

For my other readers -- it is nothing controversial.


How hard is it to find people's emails on the web?

Sometimes it takes less than 5 minutes.

Sometimes it is impossible.

Sometimes I get it by asking someone else who knows, or knows who to ask... etc.

It is rare that more time on the web helps. I do not think I have ever spent more than 5 minutes and then found it. I have sometimes used LinkedIn. I don't have a Facebook account (I was going to put in a link to the latest Facebook privacy breach, but (1) by the time you read this they may have had another one, (2) you all know about it, and (3) when I typed `Facebook Scandal' into Google I got the Facebook page for the TV show Scandal).

Should people make their emails public? I can see why one does not want to. The old saying is that if you owe people money you want to be hard to find, but if people owe you money you want to be easy to find.

Contrast email to what people DO put online. A few years ago I needed someone's email address. I found his website. From his website I found out the exact day he lost his virginity. Okay... didn't need to know that. But I still could not find his email address. I later asked someone who asked someone, etc., and got it. But I was struck by what was private and what was public. This is NOT a complaint (though I wish it were easier to find email addresses), just an observation.

Thursday, October 04, 2018

Google added years to my life

If you google

gasarch

you used to get the following: here

Please go there and notice how old they say I am.

Okay, you are back. You may have noticed that they say I am 68. Gee, I don't feel 68 (I feel younger).

I have no idea how Google got my age wrong.

0) I found out about this when I saw my age in an article about the Muffin problem. The article is here. I had been in contact with the author earlier so it was easy to contact him, assure him that I appreciated his throwing scammers and spammers off my trail by giving me the wrong age, but I wondered why he chose 68. He then apologized (which was not needed) and pointed me to the Google thing.

1) My age was not on my Wikipedia page. Nor was my birth year.

2) I do not recall ever telling Google my age -- but then again, Google knows what I search for and perhaps deduced an incorrect age from that (I've been watching a very old Western, Maverick, lately, which may have fooled them. So my plan is working!)

3) Google thinks I published with Hilbert (see here or here), so maybe that is what made them think I am 68 years old. Hmm, 68 is still too young for that. If I had been a 10-year-old math genius in 1938 (Hilbert died in 1943, but since I am not a 68-year-old math genius I chose numbers to make the arithmetic easy) and published with him then, I would now be 90. Not 68. So that is not the answer.
(Side question: are any of Hilbert's co-authors still alive?)

Seriously, if anyone has any ideas why Google had it wrong, let me know.

4) Lance was outraged at this and hence put my birth year on my Wikipedia page, thinking that would fix it. (I was not outraged, just amused.)

5) It was taken off my page since Lance couldn't prove it.

6) Lance and I mentioned my age in a blog post and that was proof enough. So our blog is a primary source. We should use this sparingly -- with great power comes great responsibility. (See here for more on that theme.)

7) Other weird internet stuff: What does it take to get a Wikipedia page? A Nobel Prize in Physics helps. See: here.


Monday, October 01, 2018

Muffin Math

Lance: It's Friday afternoon and the Dagstuhl workshop has ended. We have some time before we need to take off, so how about one final typecast.

Bill: Always a good idea.

Lance: First the exciting news: Nitin Saxena won the Shanti Swarup Bhatnagar Prize for 2018, according to the many Indians at Dagstuhl the most prestigious science prize in the country. The awards were announced on Wednesday during the workshop. He's the S in AKS.

Bill: That's really impressive. He was only two years old when AKS had log-depth sorting networks.

Lance: Bill, you moron. You're thinking of Ajtai-Komlós-Szemerédi. I'm talking about Agrawal-Kayal-Saxena Primes in P, the topic of my second-ever blog post. Nitin, an undergrad at the time, didn't just sit on his laurels--he has had awesome results on algebraic circuit complexity that truly justify this prize.

Bill: Well congrats to Nitin. Moving on, let's talk math.

Lance: We're at Dagstuhl so we have to call it computer science.

Bill: Ronen Shaltiel gave a great but depressing talk, Indistinguishability by adaptive procedures with advice, and lower bounds on hardness amplification proofs.

Lance: Nobody sings the Dagstuhl Blues.

Bill: Suppose you have a hard function and want to convert it into a harder function, known in the biz as hardness amplification. For constant-depth circuits we have hardness results but no known process for amplification. For larger classes, like constant-depth circuits with threshold gates, we do know ways to amplify.

Lance: But we have no hardness results there? Where are you going with this?

Bill: Ronen put it nicely: "We can only amplify hardness where we don't have it." Ronen and his colleagues proved results along those lines. Lance, does that depress you?

Lance: Not as much as the sprained ankle that made me miss that talk. My turn to pick a favorite talk. I loved Michael Forbes' talk Hitting Sets for the Closure of Small Circuits. You take algebraic circuits with a parameter epsilon and take the limit as epsilon goes to zero. Forbes and Amir Shpilka show a PSPACE algorithm to find small sets containing non-zeros of these functions. These kinds of functions are studied in the GCT approach to lower bounds.

Bill: What lower bounds is this aiming to solve?

Lance: Showing the computational difference between the determinant and the permanent.

Josh Alman: You've left out the most exciting part of the conference.

Bill and Lance: So Josh, what was that?

Josh Alman: The world debut performance of Stud Muffin and Smilin' Sam singing "Do You Work on Muffin Math?"



Lance: That awesome duo looks familiar, Bill. Where have I seen them before?

Bill: That Sam can really tickle the ivories.

Lance: And Stud was definitely in the room.

Bill: On that note, take us out.

Lance: In a complex world, keep it simple.