Thursday, January 19, 2017

Infinite Series and Markov Chains

There's a wonderful new series of math videos, PBS Infinite Series, hosted by Cornell math PhD student Kelsey Houston-Edwards. Check out her latest video on Markov chains.


She gives an amazingly clear description of why, for a random walk on an undirected graph, the stationary distribution puts probability on each node proportional to its degree.

Houston-Edwards also relies, without proof, on the following fact: the expected time for a walk that starts at a vertex v to return to v is 1/p, where p is the probability the stationary distribution gives to v. Why should that be true?

It didn't seem so obvious to me, so I looked it up. Here is an informal proof:

Let V1 be the random variable giving the number of steps to get from v back to v. Take that walk again and let V2 be the number of steps to return to v the second time, and so on. Let An = V1 + V2 + ... + Vn, the random variable representing the number of transitions taken to return to v n times. In a Markov chain each Vi is distributed the same, so E(An) = n E(V1).

As n gets large the Markov chain approaches the stationary distribution and will be in state v about a p fraction of the time. So after E(An) transitions we should have returned to v about p E(An) times, but by construction we return exactly n times. Thus p E(An) approaches n, that is, p n E(V1) approaches n, and the only way this can happen is if E(V1) = 1/p.
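Both facts are easy to check numerically. Here is a small Python sketch (my own illustration, not from the video) on a 3-node path graph: the stationary probability of each node v comes out to deg(v)/(2m), where m is the number of edges, and the expected return time to v comes out to its reciprocal.

```python
# Check: on a random walk over an undirected graph, the stationary
# probability of node v is deg(v)/(2m) and the expected return time is 1/p.

adj = {0: [1], 1: [0, 2], 2: [1]}              # a 3-node path graph
m = sum(len(ns) for ns in adj.values()) // 2   # number of edges

pi, ret_time = {}, {}
for v in adj:
    pi[v] = len(adj[v]) / (2 * m)              # stationary probability of v
    # h[u] = expected steps from u until the walk first hits v.  It solves
    # h[v] = 0, h[u] = 1 + average of h over u's neighbors; iterate that
    # linear system to convergence.
    h = {u: 0.0 for u in adj}
    for _ in range(1000):
        h = {u: 0.0 if u == v else
             1 + sum(h[w] for w in adj[u]) / len(adj[u]) for u in adj}
    # first step goes to a uniform neighbor, then hit v from there
    ret_time[v] = 1 + sum(h[w] for w in adj[v]) / len(adj[v])
    assert abs(ret_time[v] - 1 / pi[v]) < 1e-6
```

On the path graph the expected return times come out to 4, 2 and 4, matching 1/p for the stationary probabilities 1/4, 1/2 and 1/4.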

Sunday, January 15, 2017

My REU program/REU's in general/Flyers? Why do some Transcripts...

I run an REU program (Research Experiences for Undergraduates) and would normally urge you to urge undergrads who would benefit to apply, presenting both this link: here and this flyer: here.

I just did that. But I want to talk about REU programs in general, not just mine. A few points follow, which can be construed as advice, though they're really more questions plus what I do.

1) Do flyers still have any effect? If there is a bulletin board in a department that is known as the place to look for announcements of summer opportunities, then YES. Otherwise, I'm not sure. When I asked this question of a professor I had emailed the flyer to, she said that at her school they stick flyers in the bathroom stalls, where they are sure to get a captive audience.

2) Should you visit schools directly? I have done this; however, most of the applicants find out about the program from the NSF REU website.

3) Should students pick projects ahead of time or in the first week? When students apply to my program they list a set of projects they want to work on (unranked, though the Statement of Purpose can say which one(s) they REALLY want) so we can start on Day 1. Some of the mentors are in contact with students ahead of time. Other programs have students decide in the first week. There are PROS and CONS to both.


4) How to do admissions? I do them myself, since there are few enough applications (about 180 for 10 slots, though see the next item) and since there are so many criteria to balance that I'd rather not get into arguments with other committee members. I will sometimes (for example) say:

``John, here are two people who want to work on your project. What do you think of them?''

5) Of the 180 applicants, about 50 do not have a statement of purpose. For me this is a deal-breaker. Either they were in the middle of applying and took another offer, which is fine but no reason for me to even look at the application, or they have a hard time completing things, and again, not worth my time. A prof once asked me, ``But what if they are really good?'' There are plenty of really good applicants who do fill out the statement of purpose.

6) The main activity is research but we have some social activities as well.

7) How big should teams be? We try to avoid teams of 1. We usually have teams of 2 or 3.

8) What about students from your own school? I tend to urge them to go elsewhere, since the NSF likes REU programs to serve students from NON-research schools, and because I personally think it's more broadening to go elsewhere. Other programs are set up differently.

9) Why do some transcripts not have the name of the school on them? Drives me nuts.

In the Summer of 2017 I will be running this program for the 5th time. Feel free to leave questions about how to run one, OR your own experiences, in the comments.


Thursday, January 12, 2017

Guest Post about the first Women in Computational Topology (WinCompTop) Workshop

The first Women in Computational Topology (WinCompTop) workshop was held in August at the Institute for Mathematics and its Applications (IMA) in Minneapolis, MN.  In total, 27 women participated, ranging from undergraduates to full professors; in addition, five future topologists (children of the participants) attended the various social events scattered throughout the week.

The central goal of this workshop was to establish research collaborations among both junior and senior women in the field, as well as to provide an opportunity for mentoring at all levels.  There were four working groups, each led by a more senior member of the community:

Giseon Heo (University of Alberta) outlined a project that extends one-dimensional scale-space persistent homology (a fundamental tool in computational topology) to a pseudo-multidimensional persistence tool that can be used in a variety of applications.

Nina Amenta (University of California, Davis) posed a problem of producing an explicit representation of a surface S from an input point cloud P drawn from a distribution over S (possibly with some noise).

Yusu Wang (The Ohio State University) discussed a new method of persistence-based profiles for comparing metric graphs; her working group's focus would be further exploring what information persistence-based profiles capture and understanding their discriminative power.

Carola Wenk (Tulane University) and Brittany Terese Fasy investigated the use of topology in map construction and comparison, particularly in understanding directed graphs with multiple lanes and overpasses.

The workshop began with each of the senior researchers presenting an overview of their working group’s topic.  After the overview of each project, working groups began to explore their topic; over the course of the week, substantial progress was made in each group.   Each working group will contribute an article to a special AWM/IMA Springer journal, co-edited by the organizers of WinCompTop 2016 (Erin Chambers, Brittany Terese Fasy, and Lori Ziegelmeier).  In addition, many of the participants who attended WinCompTop will meet once again at a special session of the AWM Research Symposium in April (see here).

The workshop also had several outings and other social events, including a poster session where over a dozen posters were presented, a panel on work-life balance, an open problem session, and several receptions or banquets. These events let participants come together as a group, establish future collaborations, and connect with one another. In addition to formally scheduled outings, informal activities such as a marshmallow roast one evening, group dinners, and many other gatherings happened throughout the week.

What we (the organizers) have learned, and some questions for the community:
How many women in Field X does it take to justify creating a “Women in X” network?  (Or, more generally, a “Group Y in X” network?)  This question was brought to our attention by Bill Gasarch (thanks for letting us post on your blog, BTW).  We started this community as a listserv over two years ago (if you’d like to join, visit: here).  Today, we have over 100 subscribers, several of whom are not women.  Opportunities are regularly posted through this listserv, and lively discussions sometimes ensue (for example, we recently had a lengthy thread listing all of the researchers under whom one might be able to pursue a Ph.D. in computational topology).  This network was started by just a handful of us who decided that there needed to be a more formal way for junior researchers to seek advice and for organizers of events to find diverse speakers.  So, perhaps the answer to Bill’s question is: just a handful of people, and you’ll be surprised how quickly things grow.

Acknowledgements

Finally, we want to end this post with a note of gratitude.  We thank NSF,
which funded the travel and local expenses of most of the participants (NSF DMS grant #1619908).  We thank Microsoft Research for a generous donation, which funded the social events and travel for one of our international group leaders.  Thanks also to AWM, which provided logistical support and advice for the event, as well as financial support for the upcoming follow-up events.  Most enthusiastically, we thank the IMA for all of their support in both time and resources: the IMA donated the meeting spaces, the breakfasts, and the poster printing, all in kind.  Last but not least, we thank every participant of WinCompTop 2016.  We’ll see you again in 2018!

If you have any questions or comments about our experience organizing WinCompTop, we encourage you to contact us:

Erin Chambers erin.chambers@gmail.com

Brittany Terese Fasy brittany@fasy.us

Lori Ziegelmeier lziegel1@macalester.edu





Tuesday, January 10, 2017

Babai Strikes Back

"So it is quasipolynomial time again"
Short Version: Babai fixed his proof.

In November 2015 László Babai announced a talk, "Graph Isomorphism in Quasipolynomial Time," at the University of Chicago, an incredible breakthrough for this important problem. The algorithm is a tour-de-force masterpiece combining combinatorics and group theory. The result sent shock waves through the community; we covered it, as did most everyone else, despite Babai's disclaimer that the result was not yet peer reviewed.

Robin Thomas at Georgia Tech immediately suggested we get Babai down to Atlanta to talk on the paper. Babai's dance card filled up quickly, but he eventually agreed to give two talks at Georgia Tech on January 9-10, 2017, right after the Joint Mathematics Meetings in Atlanta. Robin scheduled the 25th anniversary celebration of our Algorithms, Combinatorics and Optimization PhD program around Babai's talks.

Around 4 AM last Wednesday we got a lengthy email from Babai with the subject "new development" and the first line "The GI algorithm does not run in quasipolynomial time." The email explained the new running time, still a huge breakthrough as the first subexponential-time algorithm for graph isomorphism, but not as strong as before. Harald Helfgott had discovered the problem while going through the proof carefully in preparation for a Bourbaki seminar talk on the result. The email asked whether we still wanted him to give the talks (of course we did) and gave instructions for removing "quasipolynomial" from his talk abstracts. That email must have been very hard for Babai to write.

Babai posted an update on his home page, which I too quickly tweeted. News spread through blog posts, and by Thursday Quanta had an online article titled Complexity Theory Strikes Back. Scott Aaronson picked a lousy day to release his new book-length survey on the P v NP problem.

On top of all this, snow in Atlanta threatened to cancel the Monday talk. But the snow never came, and Babai started his talk at 4:30 PM to a packed room. Without telling any of us beforehand, he had posted an update to his update an hour earlier, and he announced it in his talk.


We were watching history. From the talk I tweeted the news, though Bill Cook, also in the audience, beat me to the punch. Babai went on to describe the issue, an error in the analysis of the running time of the recursion, and the fix, basically a way to avoid that recursive step, though I can't do it justice here. At the end he proclaimed, "So it is quasipolynomial time again." And so it was.

Today Babai talked about the group theory behind the algorithm as though there was never an issue in the first place.

Thursday, January 05, 2017

Learning About Learning

Yesterday Scott Aaronson released his sweeping new P v NP survey, and Babai gave an update on graph isomorphism: in short, while he still has the first subexponential-time algorithm for GI, he no longer claims quasipolynomial time. We'll have more on graph isomorphism next week.

Having seen the power of neural networks for learning, I wondered how they actually work, so over the break I decided to catch myself up. I worked through Michael Nielsen's online book Neural Networks and Deep Learning, chosen mainly because he co-authored one of the great books on quantum computing, followed by the beginning of the TensorFlow tutorial. To try out code I downloaded Python and TensorFlow and used PyCharm as my IDE (free for academics). I should really learn GitHub, since that holds all the code samples. Where were all these cool tools when I was a kid?

So what did I learn? Here is my high-level understanding, but read the above materials to get a fuller picture. A neural network is just a weighted threshold circuit, though instead of sharp threshold functions you use continuous, differentiable ones like the sigmoid or a rectifier. If you fix the weights and biases (a bias is the negation of the threshold value), the network computes a function from input to output, for example from images of digits to the digits themselves. Typically you have a sample of labelled data, and you can create an error function to see how well your network computes the desired outputs. Fix the labelled data and now you have a new function, from the weights and biases to the error.
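As a concrete toy illustration of both views, here is a tiny two-layer sigmoid network in plain Python. The weights are hand-picked numbers of my own choosing that happen to approximate XOR; this is not code from Nielsen's book or TensorFlow.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(x1, x2, params):
    # a 2-2-1 network: two hidden sigmoid units feeding one output unit
    (w11, w12, b1), (w21, w22, b2), (v1, v2, c) = params
    h1 = sigmoid(w11 * x1 + w12 * x2 + b1)   # hidden unit 1
    h2 = sigmoid(w21 * x1 + w22 * x2 + b2)   # hidden unit 2
    return sigmoid(v1 * h1 + v2 * h2 + c)    # output unit

def error(params, labelled):
    # squared error of the network over a labelled sample
    return sum((net(x1, x2, params) - y) ** 2 for (x1, x2), y in labelled)

params = ((4.0, 4.0, -2.0), (-4.0, -4.0, 6.0), (5.0, 5.0, -7.5))  # hand-picked
labelled = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR sample
```

With the weights fixed, net is a function from inputs to an output; with the labelled sample fixed, error is a function from the weights to the error, and that second function is what training minimizes.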

You want to choose weights and biases that minimize the error. The general approach is gradient descent: improve a given solution by moving a small distance in the direction opposite the gradient, and repeat. I had naively thought you estimated the gradient numerically, but in fact the gradient is computed symbolically, using a dynamic-programming-like algorithm called backpropagation based on the chain rule for partial derivatives.
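Here is gradient descent in miniature, again a toy sketch of my own: a single sigmoid neuron learning the AND function. I use the cross-entropy error, for which the chain rule collapses the gradient at the neuron's input to simply (a - y); this is the simplest possible case of backpropagation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # labelled sample

w1 = w2 = b = 0.0
lr = 0.5                                     # step size for gradient descent
for _ in range(20000):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:
        a = sigmoid(w1 * x1 + w2 * x2 + b)   # forward pass
        delta = a - y                        # d(error)/d(input), by chain rule
        g1 += delta * x1                     # d(error)/d(w1)
        g2 += delta * x2                     # d(error)/d(w2)
        gb += delta                          # d(error)/d(b)
    # move a small distance against the gradient
    w1 -= lr * g1; w2 -= lr * g2; b -= lr * gb

predict = lambda x1, x2: sigmoid(w1 * x1 + w2 * x2 + b)
```

After training, the neuron outputs a value near 1 on input (1,1) and near 0 on the other three inputs. Real backpropagation does the same chain-rule bookkeeping, but layer by layer through a deep network.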

Convolutional nets have a special first layer that captures features of pieces of the image. Recurrent neural networks allow feedback loops.

Nielsen shows how to build and train a neural net from scratch. In TensorFlow you just design the network, and it automatically computes the gradient and runs a highly optimized algorithm, on GPUs or specialized hardware, to minimize the error.

What caught my eye is how much of an art machine learning still is. How many nodes should your network have? How many levels? Too many may take too long to train and could cause overfitting. Too few and you don't have enough parameters to create the function you need.

What threshold function do you use, and how do you aggregate results? There are lots of tricks for avoiding overfitting and speeding up the optimization. There's no fixed procedure for choosing these parameters, though you can test how well you do. So there's lots of trial and error in learning, and experience (which I don't have) certainly helps.

So we have amazingly powerful learning tools but for now we still need humans to learn. For now.

Monday, January 02, 2017

Predictions for 2017

Lance's Complexity Year in Review, posted the last week of the year (let's hope that P vs NP is not resolved on Dec 31), is a tradition that goes back to 2002.  Bill posting predictions for the coming year is a tradition that goes back to 2016; here is the last one: here.  My predictions were not very good: all of those that came true were obvious (P vs NP will not be resolved, CS enrollment will go up). My biggest goof: that Hillary Clinton would beat ... Ted Cruz, by accusing him of being born in a foreign country.

However, being wrong never stopped me before, so here are my predictions for 2017:

1) Some people think Trump will be more presidential once he takes office. Some point to the Checks and Balances in the System, but Congress is ruled by his party. Some point to the media as a watchdog, e.g., here; a more accurate picture is due to John Oliver here. Some say Trump is a closet liberal. But I doubt his true beliefs, if he has any, matter. It's going to be bad. I second Scott Aaronson's call to not normalize this: here.

2) Not a prediction but a thought: prior arguments against Trump were countered with `Well, Hillary is just as bad.' They can't use that anymore. The following absurd conversation might happen:

NEWS: Today some Democrats and John McCain questioned Donald Trump's nominee for Secretary of State, Rex Tillerson, about his past dealings with Russia and Putin.

FOX NEWS: But Hillary was the worst Secretary of State ever! Benghazi!


3) I will be asked to review at least one paper claiming P=NP (or P NE NP, or it won't be clear which they are claiming) and at least one paper claiming to determine a Ramsey or VDW number. They will be garbage. Cranks are getting into more sophisticated areas, so others may be asked to look at ``solutions'' to open problems in harder areas of math. The Navier-Stokes equations (a Millennium Problem, see here) might be a good area for cranks, since they might get out some numbers and think they've solved it. I'm glad I'm not in that area.

4) Our Popular Posts links (which are determined by a program, not by us) will continue to include some of our most recent posts AND my very old post on Russell and Whitehead using 300 pages to prove 1+1=2. Why is that post shown as being so popular? I doubt it IS that popular. So, what's going on?

5) Recall that Josh Alman and Ryan Williams showed that one method for proving lower bounds probably won't work: here. There will be more results that rule out techniques.

6) n-person envy-free cake cutting can now be done with a number of cuts bounded by TOW(n), a tower of exponentials. There will be better upper bounds, or some lower bounds, proven on this during the year.
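Here TOW(n) denotes an iterated-exponential "tower" function (the Aziz-Mackenzie bound is, roughly, a tower of n's of height six). A small Python sketch of my own, just to convey how fast such a bound grows:

```python
def tower(n, height):
    """Right-associated tower n^(n^(...^n)) with `height` copies of n."""
    result = 1
    for _ in range(height):
        result = n ** result
    return result

# tower(2, 4) = 2^(2^(2^2)) = 2^16 = 65536
# tower(3, 3) = 3^(3^3) = 3^27, already about 7.6 trillion
```

Even a tower of height three on n = 3 exceeds 7 trillion, which is why better upper bounds, or matching lower bounds, would be very welcome.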

7) There will be a big breakthrough on derandomization- possibly L=RL.

8) There will be a big data breach.

9) Some minor celebrity will die the last week of the year and hence make neither the `who died in 2017' lists nor the `who died in 2018' lists. In 2016 this happened to William Christopher. Why do people make the end-of-the-year lists before the year ends?

10) Fake news will get worse and worse. After Pizzagate there was NO apology or regret from the people who spread the false story.

11) Fake Journals, Schools, and accreditation agencies will continue to grow.

Thursday, December 29, 2016

Complexity Year in Review 2016

Paper of the year goes to A Discrete and Bounded Envy-Free Cake Cutting Protocol for Any Number of Agents by Haris Aziz and Simon Mackenzie. It might not seem like a traditional complexity result, but cake cutting is a computational process with a desired set of properties, and this paper settles a long-standing open question. Bill posted about the paper back in June.

Some other nice papers from 2016: Hadamard not so rigid after all, and not usable for Valiant's lower-bound program; 2-source extractors from low-entropy sources, which get near-perfect randomness from two weak sources; query complexity separations (deterministic, randomized and quantum) in two wonderful constructions beautifully presented at STOC; space-efficient subset-sum; and learning algorithms from natural proofs.

2016 will go down as a watershed year for machine learning: with AlphaGo, huge advances in translation, self-driving cars and tools like TensorFlow out in the open, we'll truly see learning everywhere. Prediction didn't have such a great year, with bad calls on Trump's nomination, Brexit and Trump's election. Machines learning humans still has a way to go.

We thank our guest posters Molly Fortnow, MohammadTaghi HajiAghayi, Samir Khuller and Avi Wigderson.

We remember Erich Bloch, Hartmut Ehrig, Carrie Fisher, Rūsiņš Freivalds, John Glenn, David Johnson, Marvin Minsky, Peter Naur, Seymour Papert, Hilary Putnam, Lloyd Shapley and Boris Trakhtenbrot.

As the surprises of 2016 lead us into an uncertain 2017 let me leave with my usual advice: When in a complex world, best to keep it simple.

Monday, December 26, 2016

Hidden Figures

Yesterday, in our tradition of seeing math movies on Christmas, we saw Hidden Figures, the story of the African-American women who worked as "computers" at NASA in 1961 Virginia, in the last vestiges of "separate but equal". The movie focuses on three women, Katherine Johnson, Dorothy Vaughan and Mary Jackson, as they dealt with and crossed racial and gender boundaries. But this isn't just a movie you should see; it's a movie you will enjoy watching, and I highly recommend it when it goes into wide release in the US on January 6. Some minor spoilers ahead.

Beyond the struggle, Hidden Figures does work as a math movie. The major storyline follows the group of mathematicians who compute the trajectories of the Mercury flights with Katherine Johnson playing a main role in figuring out how to pull John Glenn out of an elliptical orbit into a parabolic safe entry back to Earth.

The movie also serves up lessons about computers. The new IBM mainframe arrives and threatens the need for human computers, the women (here segregated into two groups) who did the tedious manual calculations that NASA relied on. In the movie Dorothy Vaughan recognizes the threat and retrains her team to learn Fortran, perhaps a parable for how technology changes jobs today.

The majority of the audience in the sold-out theater were African-American women, and they laughed and cheered as the heroines of the movie succeeded. While we have made much progress in the last 50 years, we still have far to go.