Thursday, June 27, 2013

Friends Don't Let Friends Carpool

The AAA Foundation measured cognitive distraction while driving and reported that talking with a passenger in the car is about as dangerous as using a cell phone. On a scale of 1 to 5, a handheld cell phone caused a distraction level of 2.45, a passenger 2.33 and a hands-free phone 2.27. On top of this, distraction puts the passenger at risk as well as the driver, whereas the person on the other end of a cell phone conversation can't be harmed by the driver's distraction.

Since the popular media ignores this risk, as a public service I present some guidelines:
  1. Avoid carpooling whenever possible. While there are some advantages (less traffic, pollution and consumption of natural resources), is it worth putting the lives of the driver, passengers and others at extra risk?
  2. If you do carpool, do not talk to each other except in case of emergency.
  3. If you need to talk, pull over to a safe place and turn off your engine before engaging in conversation.
Car manufacturers must share some of the blame by building cars with multiple seats and not physically separating the driver from the other passengers.

In the same study, the AAA Foundation rated solving difficult math and verbal tasks at the top distraction level of 5. So some words of advice particularly for readers of this blog:
Don't Drive and Derive

Monday, June 24, 2013

Quantum Techniques/Gen Functions- don't be afraid of new techniques

Ronald de Wolf gave a GREAT talk at CCC on the applications of Quantum techniques to Classical Problems. He made the analogy of using the Prob Method to prove non-prob results. This reminded me of the following false counterarguments I've heard about new techniques:

  1. The Prob Method: Isn't that just counting?
  2. Kolmogorov complexity: Isn't that just the Prob Method?
  3. Information complexity: Isn't that just Kolmogorov complexity?
  4. Counting: Isn't that just Information Complexity?
In all of the cases above the complaint is idiotic--- while one COULD translate some (all?) proofs using the Prob Method to Counting, it is easier to think in terms of Prob. The translation would be harder than just getting used to thinking probabilistically. By coincidence I spent some of my time at CCC looking for simple examples of generating functions where it would be difficult to do it any other way. I found one and liked it so much that I did a writeup FOR YOU MY READERS! I suspect that it COULD be done using just algebra (or something) but you wouldn't want to. Here is the theorem and a link to my writeup:
(Schur's Theorem) Let a_1, a_2, ..., a_L be denominations of coins such that no number ≥ 2 divides all of them. Then, for large n, the number of ways to make change of n cents is
n^(L-1)/((L-1)! a_1 a_2 ⋯ a_L) + O(n^(L-2))
For full proof see here. My writeup is based on that in Wilf's book generatingfunctionology (the title page really does use a small g for the first letter).
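Schur's count is easy to sanity-check numerically. Below is a minimal Python sketch (purely illustrative, not part of my writeup) that counts the ways to make change by dynamic programming--- which amounts to expanding the generating function ∏_i 1/(1-x^(a_i)) one factor at a time--- and computes the leading term from the theorem. With pennies, nickels, dimes and quarters it recovers the classic 242 ways to change a dollar.

```python
from math import factorial

def count_change(n, coins):
    # ways[m] = number of ways to make m cents. Adding one coin
    # denomination at a time is the same as multiplying the
    # generating function by 1/(1 - x^a), truncated at degree n.
    ways = [0] * (n + 1)
    ways[0] = 1
    for a in coins:
        for amount in range(a, n + 1):
            ways[amount] += ways[amount - a]
    return ways[n]

def schur_leading_term(n, coins):
    # The n^(L-1)/((L-1)! a_1 ... a_L) term from Schur's theorem.
    L = len(coins)
    prod = 1
    for a in coins:
        prod *= a
    return n ** (L - 1) / (factorial(L - 1) * prod)

print(count_change(100, [1, 5, 10, 25]))  # 242 ways to change a dollar
```

For fixed denominations the ratio of the exact count to the leading term approaches 1 as n grows, just as the O(n^(L-2)) error term predicts.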


The above was my INTENDED POST. However, when I showed the Gen-function proof of Schur's theorem to some students, one of them, Sam (a HS student), came back the next day with a purely combinatorial proof. It was not completely rigorous but I am sure that he, and most of my readers, could make it so without too much effort. While having two proofs (Gen-function and Combinatorial) is MORE enlightening for me and for my readers, it does dampen my point that this is a theorem for which the gen-function proof is easier. I COULD argue that the gen-function proof did not require as much cleverness, or that once you put in the rigor it is harder, but I don't really have confidence in those arguments. I include the combinatorial proof in the writeup pointed to above. Which proof is better? A matter of taste. However, I hope you enjoy both of them!

Thursday, June 20, 2013

Automate Me

An economist friend asked me if there were still productivity gains to be had for office workers (like us). After all, we have email, social networks, Skype and other easy ways to connect with everyone, not to mention search for everything. Most tasks are pretty straightforward to do online. How much easier can it get?

There are some obvious answers to his question, such as better automated filtering of all the information thrown at us. But here's what I really would love to see--an automated electronic me.

I have about 115000 email conversations in Gmail not counting spam. Google must have tens or hundreds of billions of emails from everyone combined.

So Google can learn both how emails are typically answered and my particular email style. So when I hit reply, Gmail should be able to pre-fill a reasonable reply. I can edit as needed and then send. Saves me much time.

Of course Google will learn from the changes I make and get more accurate each time. After a while I can trust Gmail just to answer a subset of my email. After a while it can answer most of my email. In the future Google can referee papers, write my blog posts and prepare my class lectures.
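As a toy illustration--- this has nothing to do with how Gmail actually works, and all the sample emails and replies below are made up--- here is the simplest possible version of the idea: suggest a reply to a new email by retrieving the reply I gave to the most similar past email, with bag-of-words cosine similarity standing in for whatever Google would really use.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Bag-of-words cosine similarity between two strings.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_reply(new_email, history):
    # history: list of (past_email, my_reply) pairs.
    # Return the reply attached to the most similar past email.
    best = max(history, key=lambda pair: cosine(new_email, pair[0]))
    return best[1]

history = [
    ("Can you review my paper by Friday?", "Sure, send me the PDF."),
    ("Lunch tomorrow?", "Yes, noon at the usual place."),
]
print(suggest_reply("Would you review our paper?", history))
```

A real system would learn from the edits I make to each suggestion; in this sketch that would just mean appending the (email, edited reply) pair to history.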

I can hide out and prove theorems while Google does everything else for me. Until Google proves its own theorems and then I'm just out of a job.

Monday, June 17, 2013

Fraud or not ?

For each of these, are they frauds?

  1. The Turk was a chess-playing ``computer'' (around 1770) that was later discovered to be cheating--- a human made the moves. As Ken Regan knows well, we now have the opposite problem: humans who cheat by having a computer make the moves. Note that the Turk still played an excellent game of chess and hid the human element. This IS an achievement--- just not the one people wanted. Fraud? Yes
  2. I once heard a rumor (NOTE- this may not be true, that's why it's called a rumor) that Hybrid cars get good gas mileage NOT because of the battery but because in the effort to get good mileage the designers rethought other things like the aerodynamics and how the gas powers the car. If I buy a hybrid car that gets 45 miles per gallon but then find out that it gets this NOT because of the battery, but because of really really good engineering- was I cheated? My sense is NO since I wanted good gas mileage. I may wonder why I need to replace the battery, or even if I need to. Fraud: I'll say NO but it's certainly debatable.
  3. Someone sells a single-purpose quantum computer to factor numbers and it works REALLY WELL but later it is discovered that it didn't use quantum at all(!)---it instead used a new classical algorithm (e.g., an extension of the Number Field Sieve)--- would the buyers consider themselves cheated?
    1. If the buyers were people who just want to factor really large numbers then perhaps they wouldn't care.
    2. If the device was meant to fool granting agencies or venture capitalists to fund more quantum, then it is fraud. One may wonder why the device-maker didn't just apply for funding in crypto.
    3. If the buyer is an academic who then writes an article about how quantum computing is finally practical, when the truth is discovered he may have his credibility (unfairly?) tarnished.
  4. What if someone had a quantum computer that factored really well but was advertised as a really good classical algorithm that used hard number theory? Somehow that seems very funny to me as a scenario so I won't even ponder fraud or not.
  5. I have heard that the current quantum computers that do such miraculous things as factor 15 (darling says `factor 15? I could do that without breaking a sweat') or find R(3) (I always thought it was 6 and now I know!) may not be ``really quantum''. This is problematic since nobody really wants to factor 15 or find R(3)--- that is, there is no analog to the people who want good gas mileage or the people who want to factor large numbers in my two examples above. These devices are JUST for demonstration purposes. If it's not quantum, it's not demonstrating anything. Fraud? Yes, but are they really fooling anyone?

Thursday, June 13, 2013

The Internship

Last weekend I took my teenage daughters to see The Internship, the Vince Vaughn-Owen Wilson vehicle where they play two forty-year-old interns at Google. It basically follows the standard underdog story Vince Vaughn so greatly spoofed in Dodgeball.

We went since most of the Google scenes were filmed at Georgia Tech last summer, with the climactic final meeting filmed in the atrium of the Klaus building that houses the School of Computer Science.

The movie was at best mildly amusing, and it's not often you see an Emacs vs Vi discussion in a major motion picture. Mostly the movie played as an homage to Google, what a wonderful magical place it is and all the great things they do for the world. To some extent that worked: Both of my daughters came out of the movie wanting to work at Google.

Larry Page, talking about the movie said "The reason we got involved with the movie ‘The Internship’ is that computer science has a marketing problem. We're the nerdy curmudgeons." I do think CS has a marketing problem, though recently of a very different nature.

The US government is using big data as big brother. The US-China discussions on cyber attacks remind me of the US-USSR talks on nuclear weapons in the 70's. Let's not mention how some people believe computers are destroying jobs and widening the gap between the haves and have-nots.

But of course I remain very bullish on computer science and the great things we can achieve with computing. And sometimes it takes silly movies like The Internship to drive that point home.

Tuesday, June 11, 2013

STOC: Some NON-radical ideas

At the STOC business meeting Joan Feigenbaum (PC chair) raised some very good points. There was no real discussion (or perhaps the burning car was the discussion). Here are the issues and some thoughts as I see them. Note that I am not speaking in any official capacity. I speak of STOC but many of my comments apply to other conferences.


What is the purpose of STOC? Initially it was to help spread knowledge of the latest results, through both talks and lunch. Even though we can now tweet the latest VDW numbers, STOC still serves this purpose. Another (likely unintended) purpose of STOC is to give researchers a quick yet prestigious way to publish. Hiring committees and Tenure committees DO ask questions like How many STOC/FOCS publications does she have? Some people think this is an awful system since these papers are not refereed carefully. I am not going to debate that here. My only concern is making STOC better at spreading knowledge.

What are some of the problems with STOC?

  1. People don't want to serve on the program committee since it's a lot of time and they can't submit. The two-tiered system used for STOC 2013 seems like a good solution to this.
  2. Referee reports (can we even call them that?) are often not very informative. The two-tiered system COULD help this since each committee member has less work and there is a small oversight committee. Another solution that some conferences use is to give the authors a chance to rebut a report and/or rewrite the paper. I'll discuss this more in the next point.
  3. Since the reviewing process is rushed there have been papers that are just plain WRONG. This can be confusing for someone coming to the literature. Also there are throw-away comments like This can easily be extended to the case of weighted graphs. where this is not easy at all. How big a problem is this? How much worse than Journals is it? I DON'T KNOW. Would the Rebut/Rewrite help this? PRO: Referees don't have to decide RIGHT NOW what to do and can ask the authors questions. CON: More back and forth, more work. CAVEAT: This might make STOC more like a journal with fast turn-around time.
  4. Some of the papers never get into Journal Form. Again Rebut/Rewrite may help in that the STOC version is better, but this is more giving in to the problem rather than solving it. Demanding full versions of papers (now possible since with e-proceedings page limits are less of an issue) is a good idea (and I think IS being used now by STOC).
  5. Many good papers get turned down. Going to three parallel sessions would help this. There may be logistical problems here, but I think this is a good idea. Are there enough good papers to make this work? I think so- and the committee would have the freedom to NOT use all the sessions in case there aren't quite enough papers. I do not think this would make STOC's prestige decline.
  6. It has been said that only narrow technically hard stuff gets in and not simple short new ideas. It's hard to know if this is really true. But in any case the three-parallel sessions may help this since there would be room for different types of papers.
  7. Personally I get more out of the workshops and invited talks than out of the refereed talks. Hence I would like more of those. Posters are good also. More to the point- I would like more VARIETY in what's at a conference since people get knowledge in different ways.
  8. Can you really communicate your latest and greatest result in a 20 minute time slot in a crowded room where the adjacent bathroom is out of order? Even though we've made great advances in technology (I call PowerPoint PROGRESS but some disagree) and in plumbing (in the old days STOC people had to use an outhouse- do young people even know what an outhouse is anymore?), is there a better way to do this? It was suggested that ALL talks be POSTER sessions (NIPS does this). This should NOT be viewed as inferior or demeaning so long as we still have published proceedings (whatever that means in the days of arXiv) and high standards. The only relevant question is: Would posters be a better way to convey results? I DO NOT KNOW, but I think it would be worth trying out.

So in summary I want to see (1) more workshops, invited talks, and student posters, (2) Full papers in the proceedings, (3) two-tiered program comm. (4) either go to three parallel sessions or have posters. Some of these could be combined-- like a workshop on max flows, and then posters on the max flow papers that got in. The rebut/rewrite I am more ambivalent on but that may also be a good idea. These ideas are NOT radical (and not even original) and it is NOT my purpose to drain STOC of its prestige. Whether that is a good idea is another debate.

Thursday, June 06, 2013

Complexity Typecast

Lance: Welcome to another exciting typecast coming from sunny Stanford University. I'm with Bill at the 28th Conference on Computational Complexity. Hi Bill, I see you're now at Mizzou.

Bill: Yes, my name tag says Univ. of Missouri but the body is still at Maryland. But Missouri is the Show-Me State and I don't believe theorems until you show me the proof.

Lance: Interesting name tags going around. Joshua Brody is at the University of Aarhus in Windsor, Vermont. But outside the name tags, this has been a well-run meeting.

Bill: Indeed. So Lance, anything seem different this year?

Lance: I'm noticing a few trends at both STOC and Complexity. Both have strong attendance this year, especially for West Coast meetings, and more papers than usual. Though fewer women attendees and I'm not sure why.

Bill: I was at a computability meeting at Iowa recently where there were six women but they were the same six women from twenty years ago. That does not bode well for the future.

Lance: I think that says more about computability as I'm guessing all the attendees were there twenty years ago.

Bill: I resemble that remark. Let's talk math. At one point you were a Kolmogorov skeptic, now you are a believer.

Lance: Yes once I actually used it for a theorem I saw the light.

Bill: Speaking of light, are you now a quantum believer? Ronald de Wolf gave an awesome talk on the applications of quantum techniques to classical theorems. Were you convinced?

Lance: There are times that thinking quantumly can help generate good theorems. Nice to see quantum is good for something.

Bill: They didn't have quantum back in the days of the first complexity conference in 1986.

Lance: Yes the world was classical back then, just like the world was flat in 1400. No one here remembers that complexity meeting in 1400 but I have seen a few people from the original 1986 meeting.

Bill: Besides us, Eric Allender, Jonathan Buss, Steve Homer and Osamu Watanabe. Whether we remember anything from those days...

Lance: I remember meeting you for the first time. I was just a first-year grad student and I walked into a room with you and David Barrington talking at light speed. I thought you were both so smart.

Bill: Sorry to disappoint you. So Eric is the last man standing?

Lance: Yes, since I missed complexity last year, Eric Allender is the only person to have attended all 28 Complexity meetings. He does not want that to be his claim to fame.

Bill: Back in '86 I could follow 2/3 of 3/4 of the talks. Now I can follow 1/8 of 1/4 of all the talks. Have I gotten dumber or have the talks gotten harder?

Lance: Yes.

Bill: Thank you Lance, how about you?

Lance: Yes. Mostly the techniques are quite different from the computability-type tools we used back in the day.

Bill: A field must change or die. I'm glad we're changing unlike certain areas of math I will not mention.

Lance: Speaking of change...

Bill: There are 242 ways of changing a dollar into pennies, nickels, dimes and quarters.

Lance: I did not know that! Moving on, what do you think of Joan Feigenbaum's suggestions on changing STOC?

Bill: I'll do a blog post on this later, but I'm generally in favor of more people on the PC (2-tiered), more papers in conference (3 parallel sessions) and more workshops, invited papers and posters.

Lance: So Bill ready to wrap it up?

Bill: Yes, it's time.

Lance: So remember, in a complex world best to keep it simple. And buy my book.

Tuesday, June 04, 2013

STOC is Burning

Bill and I are in Palo Alto this week for the co-located meetings of STOC and Complexity. In a new ACM policy, the STOC 2013 papers are freely downloadable by all for the next month. Check out the best papers and best student papers.

Last night smoke from a burning car preemptively ended the STOC business meeting.

Photo from Moritz Hardt

Before the fire I live tweeted the business meeting. In short, a possible record attendance for a west coast meeting (364), one less than New York last year. Next year's STOC will be in the same hotel in New York. A record number of accepted papers (100). PC chair Joan Feigenbaum talked about her two-tiered committee and several potential experiments for future STOCs (eliminate proceedings and just point to arXiv papers for instance). Read her blog interview for more.

Lane Hemaspaandra received the SIGACT distinguished service prize for running the SIGACT News complexity column. Gautam Kamath won the STOC 2012 best student presentation award. No award this year because there aren't videos for the talks.

Gary Miller gave the Knuth Prize lecture. He talked about new techniques for solving systems of equations based on graphs that have many applications, including new almost-linear-time algorithms for approximating undirected max flow.

More from Palo Alto later this week.