Monday, March 02, 2015

Leonard Nimoy (1931-2015)


Bill and I rarely write joint blog posts but with the loss of a great cultural icon we both had to have our say.

Bill: Leonard Nimoy (Spock) died last week at the age of 83. DeForest Kelley (McCoy) passed away in 1999. William Shatner (Kirk) is still alive, though I note that he is four days older than Nimoy.

Spock always tried to be logical. I wonder if an unemotional scientist would be a better or worse scientist.
Does emotion drive our desire to learn things? Our choice of problems to work on? Our creativity?

Did Star Trek (or its successors) inspire many to go into science? Hard to tell but I suspect yes. Did it inspire you?

Their depiction of technology ranged from predictive (communicators are cell phones!) to awful (the episode 'The Ultimate Computer' wanted to show that humans are better than computers. It instead showed that humans are better than a malfunctioning killer computer. I think we knew that.) TV shows now hire science consultants to get things right (The Big Bang Theory seems to get lots of science right, though its view of academia is off), but in those days there was less concern for that.

Lance: I'm too young to remember the original Star Trek series when it first aired but I did watch the series religiously during the 70's when a local TV station aired an episode every day, seeing every episode multiple times. The original Star Trek was a product of its time, using the future to reflect the current societal issues of the 60's. Later Star Trek movies and series seemed to have lost that premise.

Every nerdy teenager, myself included, could relate to Spock with his logical exterior and his half-human emotional interior that he could usually suppress. Perhaps my favorite Spock episode was the penultimate "All Our Yesterdays", where Spock, having been sent back in time, takes on an earlier emotional state of the old Vulcans and falls in love.

I did see Leonard Nimoy in person once, during a lecture at MIT in the 80's. He clearly relished being Spock and we all relished him.

Goodbye Leonard. You have lived long and prospered and gone well beyond where any man has gone before.

Thursday, February 26, 2015

Selecting the Correct Oracle

After my post last week on the Complexity accepts, a friend of Shuichi Hirahara sent Shuichi an email saying that I was interested in his paper. Shuichi contacted me, sent me his paper and we had a few good emails back and forth. He posted his paper Identifying an Honest EXPNP Oracle Among Many on the arXiv yesterday.

Shuichi asks the following question: Given two oracles both claiming to compute a language L, figure out which oracle is correct. For which languages does there exist such a selector?

For deterministic polynomial-time selectors, every such L must sit in PSPACE and all PSPACE-complete languages have selectors. The question gets much more interesting if you allow probabilistic computation.

Shuichi shows that every language that has a probabilistic poly-time selector sits in S2EXP, the exponential analogue of S2P. His main theorem shows that EXPNP-complete sets have this property. His proof is quite clever, using the EXPNP-complete problem of finding the lexicographically least witness of a succinctly-described exponential-size 3-SAT question. He uses PCP techniques to have each oracle produce a witness and then he has a clever way of doing binary search to find the least bit where these witnesses differ. I haven't checked all the details carefully but the proof ideas look good.
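To give a feel for the binary-search step, here is a toy sketch of the search idea only, not Hirahara's actual protocol: in the real proof the prefix comparisons are mediated by PCP machinery rather than direct string access. Given two witness strings that disagree somewhere, we can locate the first index where they differ using only "do the length-m prefixes agree?" queries.

```python
def first_difference(w1: str, w2: str) -> int:
    """Return the least index at which w1 and w2 differ.

    Requires w1 != w2 and equal lengths. Uses O(log n) prefix-agreement
    queries: invariant is that the length-lo prefixes agree while the
    length-hi prefixes disagree.
    """
    assert len(w1) == len(w2) and w1 != w2
    lo, hi = 0, len(w1)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if w1[:mid] == w2[:mid]:   # in the real proof, a PCP-style check
            lo = mid
        else:
            hi = mid
    return hi - 1                  # first index of disagreement
```

The point is that each comparison touches only a prefix, so the location of the first disagreement is pinned down with logarithmically many queries.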

That still leaves an interesting gap between EXPNP and S2EXP. Is there a selector for Promise-S2EXP-complete languages?

Monday, February 23, 2015

Eliminate the Postal Service

It's gotten very difficult to mail a letter these days. There are no mailboxes along my ten mile commute. Georgia Tech has contracted with an outside company to handle outgoing mail. To send a piece of mail requires filling out a form with an account number and many other universities have similar practices. Mail into or out of the university can tack on several days. I sent a piece of mail from Georgia Tech in Atlanta to the University of Pennsylvania in Philadelphia--two weeks from sender to recipient.

Why do I have to send mail in this world of email, texts and instant messages? Some places require "original receipts". Some government agencies require forms sent by mail or fax, and I've given up trying to find a reliable fax machine with someone who knows how to work it. It's still not always easy to transfer money to another person or company without a physical check. I stopped using the Netflix DVD service because it lost its value when I had to make a special trip to mail the DVD back. It's easier to find a Redbox than a mailbox.

Meanwhile most of the mail I receive is junk, or magazines, which look better on the iPad, or official letters that I have to scan to keep an electronic copy since they didn't email it to me. I do get the occasional birthday card or hand-written thank you note, a nice Southern tradition, but we can live without it. USPS also does package delivery but that is often handled better by private providers such as UPS and FedEx.

So what if we just eliminated the US Postal System, say with a three-year warning? There is nothing that can't be replaced by electronic means, and a planned closing would force the various governments and businesses to make that final push. We'll reminisce about mail like we do about the telegram. But why keep an inferior technology alive? It's time to move on.

Thursday, February 19, 2015

And the Winners Are...

[Shortly after this post went up, STOC announced their accepted papers as well]

I always like that the announcement of accepted papers for the Computational Complexity Conference happens around the time of the Academy Awards. These acceptances are the Oscars for our community that shares its name with this conference and the blog.

The title that interests me is Identifying an honest EXP^NP oracle among many by Shuichi Hirahara, since it seems closely related to some of my own research. Not only can I not find the paper online, I can't even find the author's email. Shuichi, if you are reading this, please send me a copy of your paper.

Luckily not all papers are so elusive. Murray and Williams show that proving the NP-hardness of computing circuit complexity would require proving real complexity class separation results. Oliveira and Santhanam give tight lower bounds on how much you can compress majority so that you can compute it with constant-depth circuits. A different Oliveira has two papers in the conference: a single-authored paper showing that polynomials of low individual degree with small low-depth arithmetic circuits have factors that are similarly computable, and a paper with Shpilka and Volk on hitting sets for bounded-depth multilinear formulas. A hitting set is a small, easily and deterministically computable set that contains, for every such arithmetic circuit, an input on which the circuit is non-zero.

Many more interesting papers await, and you can see them all at the conference in Portland, this year part of the Federated Computing Research Conference, which includes STOC, SPAA and EC, which now stands for Economics and Computation. My tip: book your hotels now; they fill up fast.

Tuesday, February 17, 2015

Stephen Colbert, Jon Stewart, Andrew Sullivan, William Gasarch

Stephen Colbert is leaving the Colbert Report

Jon Stewart is leaving the Daily Show

Andrew Sullivan is no longer blogging

Bill Gasarch has resigned as SIGACT News Book Review Editor

Where will Gasarch get his news from?

Where will Colbert-Stewart-Sullivan get their reviews-of-theory-books from?


Why am I stepping down? I've been SIGACT News Book Review Editor for 17 years (just as long as Jon Stewart has been doing The Daily Show). That's enough (more than enough?) time. I want to spend more time with my books and my family. I will be on sabbatical next year so I am generally cutting down on my obligations.

I have enjoyed it: I've gotten to know some publishers and gotten more free books than I know what to do with. I haven't paid for a math or CS book in... probably 17 years.

While writing reviews is great, figuring out who reviews the other books, getting them the books, getting the reviews from them, and editing it all into a column four times a year can get to be routine. Though I DO like reading the reviews.

Who will take over? I asked Lance who would be good and he said `someone old who still reads books', so I asked Fred Green, who agreed to take the job. I then had to get my files (of reviews, of who-owes-me-reviews, of which-books-do-I-want-reviewed, etc.) in order to email them to him. The usual: I wish I had cleaned up my files years ago so I could have benefited from it.

The main PLUS of the job was that I got to read lots of books and learn about some fields. As someone who would rather read a good book than produce a bad paper, the job suited me.

The main NEGATIVE of the job was seeing so many books that I WANTED to review but that either I didn't have time for (and I usually KNEW that and had someone else review them) or found myself unable to finish (gee, that book is harder than I thought!), so they left my office for someone else to review.

The biggest change that Fred will encounter is e-books. Will publishers want to send out free e-books instead of hardcopies? Will reviewers want hardcopies? This is of course a very tiny part of a more profound conversation about what will happen to the book market once e-books are more common.


Wednesday, February 11, 2015

Richard Hamming Centenary

Richard Hamming was born a hundred years ago today in Chicago. He worked on the Manhattan Project during World War II, moved to Bell Labs after the war and started working with Claude Shannon and John Tukey. It was there that he wrote his seminal 1950 paper Error detecting and error correcting codes.

Suppose you send a string of bits where a bit might have been flipped during the transmission. You can add an extra parity bit at the end that can be used to detect errors. What if you wanted to correct the error? Richard Hamming developed an error-correcting code (now called the Hamming code) that encodes 4 bits into 16 codewords of 7 bits  each such that every two codewords differ in at least three bits (which we now call the Hamming distance). 

0000000 1101001 0101010 1000011

1001100 0100101 1100110 0001111

1110000 0011001 1011010 0110011

0111100 1010101 0010110 1111111

If there is a single error then there is a unique codeword within one bit of the damaged string. By having an error-correcting code you can continue a process instead of just halting when you detect a bad code.

The Hamming code is a linear code: the bitwise sum mod 2 of any two codewords is another codeword. This linear idea led to many more sophisticated codes which have had many applications in computer science, practical and theoretical.
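For concreteness, here is a minimal sketch of a (7,4) Hamming encoder and single-error corrector. The bit layout here, with parity bits at positions 1, 2 and 4 (1-indexed), is one common convention and need not match the ordering used in the codeword table above; the syndrome trick is the key idea: on a single bit flip, the three parity checks spell out the position of the flipped bit in binary.

```python
def encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(word):
    """Fix at most one flipped bit in a 7-bit received word."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]   # recheck positions 1,3,5,7
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]   # recheck positions 2,3,6,7
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]   # recheck positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3                   # 0 means no error detected
    w = word[:]
    if pos:
        w[pos - 1] ^= 1                           # flip the erroneous bit
    return w
```

Flipping any single bit of an encoded word and running `correct` recovers the original codeword, which is exactly the "unique codeword within one bit" property described above.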

Hamming received several awards, notably the 1968 ACM Turing Award and the inaugural IEEE Richard W. Hamming Medal in 1988, given for exceptional contributions to information sciences, systems, and technology. Hamming passed away in 1998.

Sunday, February 08, 2015

Pros and Cons of students being wikiheads

A wikihead is someone who learns things from the web (not necessarily Wikipedia) but either learns things that are not correct or misinterprets them. I've also heard the term webhead but that's ambiguous since it also refers to fans of Spiderman.

I like to end the first lecture of Discrete Math by talking about SAT and asking the students if they think it can be solved much faster than the obvious 2^n algorithm. This semester in honors DM I got the usual heuristics (look for a contradiction!) that may well help but certainly won't get down to much better than 2^n in all cases. This leads to nice discussions of worst-case vs avg-case and formal vs what-works-in-practice.
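For reference, the "obvious 2^n algorithm" is just exhaustive search over all assignments. A minimal sketch (the clause encoding here is my own choice: a clause is a list of nonzero integers, with k meaning variable k and -k its negation):

```python
from itertools import product

def brute_force_sat(n, clauses):
    """Try all 2^n assignments; bits[i] is the value of variable i+1.

    Returns a satisfying assignment as a tuple of booleans, or None
    if the formula is unsatisfiable.
    """
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
assignment = brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]])
```

The heuristics students suggest can prune this search, but in the worst case nothing known beats exponential time by more than a modest factor in the exponent.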

I also got the following answers:

SAT cannot be done better than 2^n since P  ≠ NP.

and

SAT can be done in O(n) time with a Quantum Computer.

They both made their assertions boldly! I gently corrected them. They had both read it on the web.

I suspect that the P ≠ NP person read something that was correct (perhaps a survey saying that 80% of all theorists THINK P ≠ NP) and misconstrued it, while the second person read something that was just wrong (perhaps one of those "by the many-worlds quantum theory a quantum computer can look at all possibilities at the same time" people).

SO- they went and looked up stuff on their own (YEAH) but didn't quite understand it (BOO) or read incorrect things (BOO). But I will correct them (YEAH). But there are other people who will never get corrected (BOO). But there are others who will get interested in these things because of the false things they read (YEAH?). The quantum person might either NOT go into quantum computing since he thinks it's all bogus now, or GO INTO it since he is now curious about what is really going on.

SO the real question is: if people get excited about math or science for the wrong reasons, is that good? Bad? Do you know of examples where incorrect but exciting science writing led to someone doing real science?

Thursday, February 05, 2015

Computers Turn Good and Evil

Entertainment Weekly proclaimed 2015 the year that artificial intelligence will rule the (movie) world with talk of the Terminator reboot, the new Avengers movie Age of Ultron, where Ultron is an attempt at a good AI robot turned evil, and Chappie who saves the world. And then there is Ex Machina, where Domhnall Gleeson "must conduct a Turing test, the complex analysis measuring a machine’s ability to demonstrate intelligent human behavior, while wrestling with his own conflicted romantic longing for the humanoid."

Let's not forget the return of the Star Wars droids and the hacker movie Blackhat that has already come and gone. On TV we have new computer-based procedurals, one for adults (CSI:Cyber) and one for kids (Buddy: Tech Detective).

With Wired proclaiming AI Has Arrived, and That Really Worries the World's Brightest Minds and Uber investing in technology to eliminate their drivers, what is the average person to think? Let me quote the famous philosopher Aaron Rodgers and say "Relax". We still control the technology, don't we?