Wednesday, November 26, 2008

Blatant Plugs

We are looking for a postdoc to join the new CS Theory and Economics groups at Northwestern. We'll consider any theory candidates though prefer candidates with research connecting both areas. We're also looking to bring in several graduate students from all areas of theory. Apply here.

Over at TTI-Chicago (where I still am an adjunct professor), we are planning to hire a couple of research assistant professors (3-year non-tenure-track) and possibly a tenure-track position as well in theoretical computer science.

This blog would be remiss in not mentioning the postdoc opportunities for complexity theorists at the new Center for Computational Intractability based in Princeton. But why be in New Jersey when you can live in the land of Obama?

The calls for papers for both the 2009 Conference on Computational Complexity and the Conference on Electronic Commerce are up. Both have deadlines slightly after the STOC notification date in early February.

Midwest Theory Day will be held Saturday December 6 at Northwestern, the first time at Northwestern since 1993.

And while I'm plugging, don't forget to send your complexity journal submissions to the ACM Transactions on Computation Theory.

Bill and I are off for Thanksgiving. Have a great holiday and we'll see you on Monday.

Tuesday, November 25, 2008

The Flame

I deposited a check today at a new Chase ATM. You just put the check directly into the machine, the check gets scanned and the amount verified. Saves me a small amount of time but more importantly nobody at Chase ever has to look at the check. The computer has completely taken over.

On this theme we just rented Wall·E, another Powerful-Aware-Evil computer movie, though at least this time the main protagonists are also machines. The movie scared my daughter when she saw it last summer and I had to give her the "you need to be in control of technology instead of having technology control you" talk.

In the movie, Wall·E uses a Zippo-style lighter and creates a flame. When I took a computer graphics class back in 1984 the professor presented a slide show of the state-of-the-art in computer graphics. The last shot was a photo of a flame because a good computer flame was unobtainable at the time.

One generation's "holy grail" is a new generation's "take it for granted".

Monday, November 24, 2008

Should tables be sorted? Yes but--- a question

In Yao's paper Should tables be sorted he did the following (I am paraphrasing a lot).

A (U,n;q)-WPDS (Word Probe Data Structure) for MEM is the following:
  1. A function that does the following: Given A\subseteq U, |A|=n, return the elements of A in some order. Think of them as being in CELL[1],...,CELL[n].
  2. (Assume Step 1 has been done so there are elements in CELL[1],...,CELL[n].) Given u\in U we want to know if u\in A. We have a query algorithm that asks q adaptive queries of the form What is in CELL[i]?, and then outputs YES or NO. If YES then u\in A, if NO then u\notin A.
Thoughts, results, and a question I really want someone to answer.
  1. I call it a WPDS (word probe) to distinguish it from the bit probe model.
  2. The obvious approach would be to SORT the elements of A and use binary search. Hence q=log(n+1) (rounded up). (A code sketch of this appears after this list.)
  3. For some cases where U is small compared to n, there are constant-probe algorithms.
  4. KEY: Yao showed that if U is large compared to n then sorting IS the best you can do. How big? Let R(a,b,c) be the RAMSEY number associated with a-uniform hypergraphs, b-sized monochromatic sets, and c colors. Yao showed that if |U|\ge R(n,2n-1,n!) then sorting is optimal.
  5. QUESTION: Is a lower value known? I have looked in various places but couldn't find a lower value. However, this may be a case where I'm better off asking an expert. I hope one is reading this blog!
  6. This is one of the first applications of Ramsey Theory to computer science. It may be the first, depending on how you define application, Ramsey Theory, and computer science.
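
Here is a minimal sketch (mine, not from Yao's paper) of the sorted-table scheme in item 2: put the elements of A into the cells in sorted order and answer membership queries by binary search, so each query makes about log(n+1) adaptive probes.

def build_table(A):
    # Step 1: lay out the n elements of A in CELL[1..n], here in sorted order.
    return sorted(A)

def member(cells, u, probes=None):
    # Step 2: adaptive binary search; each read of cells[mid] is one probe
    # of the form "What is in CELL[mid]?".
    lo, hi = 0, len(cells) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        x = cells[mid]
        if probes is not None:
            probes.append(mid)
        if x == u:
            return True
        if x < u:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

A = {3, 1, 4, 15, 9, 2, 6, 5, 35, 8, 97, 93, 23, 84, 62}   # n = 15
cells = build_table(A)
probes = []
print(member(cells, 9, probes), len(probes))   # at most ceil(log2(15+1)) = 4 probes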

Friday, November 21, 2008

What is your best paper? Ambiguous!

I sometimes ask people
What is your best paper?
This question is actually ambiguous. Here are criteria that you can use.
  1. Most important to the public. E.g., a big breakthrough people care about.
  2. Most important in your view. E.g., a big breakthrough on a problem that you care about, but perhaps others do not.
  3. Most citations. This might be the same as most important to the public but may take more time to be evident as such. This one is verifiable!
  4. You are personally attached to it independent of what the public thinks. E.g., you are delighted that it used theorems in p-adic cohomology to solve a problem in number theory, but nobody else cares.
  5. You had a big contribution to it.
  6. You liked how it evolved.
Hence, if someone asks what your best paper is, ask them to clarify the question. Or give them six answers.

Thursday, November 20, 2008

Is the Medium the Message?

Do you care about fonts?

In other words do you care about how your paper looks? Not the details of the proofs or the quality of the exposition but how the paper looks. The fonts, the margins, the spacing, the neatness of it all.

I used to care.

I was in graduate school when the great shift to LaTeX happened. All of a sudden our papers looked great, like finished journal versions right off the printer. I would spend hours on my papers and months on my thesis, making sure there were no overfull boxes, that equations lined up nicely, page breaks occurred at good places and hyphenations were done right. I didn't worry about fonts back in those days when we thought Computer Modern looked good.

Now I don't bother. I still fix the big ugly problems but who really cares if "nondeterministic" is hyphenated properly? As you can see from this beautiful green blog, I don't try that hard on the medium. Though the style of Bill's posts can sometimes make me look like a true artist.

Truth is you get little value in our field from looking good. So better to spend the time proving new theorems than putting the old ones in a pretty font.

Wednesday, November 19, 2008

Reflection on the old days- by Joseph Kruskal

Guest Blog by Joe Kruskal. He is the Kruskal that did the MST algorithm, the Kruskal Tree Theorem and work on multidimensional scaling. However, this post is not on any of those topics. It's a response and reflection on my post about the monotone subsequence theorem.

HIS POST:

In your post on the monotonic sequence theorem you said the following.
In those days it was harder to find out if someone else had done what you had done since they didn't have google, but it may have (in some cases) been easier since there were so many fewer researchers- you could just call them. OH- but long distance was expensive back then.
Yes, long-distance phone calls were expensive then. That's why mathematicians seldom used phone calls for that purpose. They used mail -- postal mail, of course. Now that email has become almost universal, and is seen as slow and stodgy compared with text messaging and other modes of communication that I haven't kept up with, people have no real idea what communication was like 50 years ago.

The same thing was true 50 years ago. We didn't know then how communication was done 50 years before that. In England, at least, it was quite common for a well-to-do person to send a letter to a friend to propose having dinner out together, or going to a play together, or lots of other possibilities. They would expect to get a reply within say 4 hours, time enough to send another message confirming the arrangement for that evening.

In London at least, there were 4 deliveries/pickups per day, at least for the upper classes. When my wife and I visited England in the 1950s and stayed with my sister who had moved there with her British husband, we personally observed the following, which we had been told about. When a post office mail person came to the red "post box", which displayed the pick up times, he stood there waiting until the specified pick up time, to the minute (by his watch). Only then did he open the box and take out the letters and post cards. Everyone relied on the displayed pickup times, and would hurry to the box just in time, knowing that if they got there by the posted time the mail would go out right away. Watches were not so accurate then, so I imagine that the post office pick up people checked their watches against Big Ben or other large public clocks.

My own dissertation also indicates how things had changed:

Paul Erdos was telling lots of people about a conjecture due to Vazsonyi, a Hungarian mathematician he was friendly with, who he said "had died", meaning that he had left mathematics for a well-paying job with some company -- I think it was an airplane manufacturer. I was one of many people who heard him describe this conjecture. Roughly a year later, I had put a lot of work into this problem, but was still not close to a solution. By chance I bumped into Erdos at the Princeton Junction station. We chatted. I don't know how the conversation turned to the Vazsonyi conjecture -- probably I told him I had been working on it. He said, oh, you must read a recent paper by Rado (a British mathematician, also from Hungary). I quickly went to the library and found his paper, which I read with fear and trembling. Had I been scooped? It turned out that he had made significant progress, but hadn't cracked the nut. His work combined with mine finally led to a solution.

Today, the equivalent of those two chance conversations can happen via Google and email. I feel certain that science of many kinds is developing much more rapidly than it used to for this reason (except, perhaps, in fields where progress is kept secret for reasons of financial gain).

Tuesday, November 18, 2008

Secrets from the Future

One of my crypto students pointed me to the "Nerdcore Hip-Hop" group MC Frontalot. Here's a trailer for their tribute film, Nerdcore Rising.

You can download their song Secrets from the Future that makes a good point about the life of crypto. From the (slightly explicit) lyrics

Best of all, your secret: nothing extant could extract it.
By 2025 a children's Speak & Spell could crack it.

You can't hide secrets from the future with math.
You can try, but I bet that in the future they laugh
at the half-a**ed schemes and algorithms amassed
to enforce cryptographs in the past.

I don't expect a general way to break RSA or factor numbers either on classical machines (for lack of algorithms) or quantum machines (for lack of controlled entanglement) in the next couple of decades. Nevertheless you can't count on, say, a 1024- or 2048-bit RSA key being safe in a decade or two. Better algorithms combined with faster highly parallelized machines may break those codes. Or maybe they won't—but you can't be sure.
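
If you want to play with key sizes yourself, here is a minimal sketch (mine, not from the post) using the third-party Python cryptography package, which I am assuming is installed in a recent version; bigger keys buy time against better algorithms and faster machines, at the cost of slower operations.

from cryptography.hazmat.primitives.asymmetric import rsa

for bits in (1024, 2048, 4096):
    # generate an RSA private key of the given size; generation slows down as bits grow
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    print(bits, key.key_size)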

Even the NSA gives expiration dates on encrypted data. If you used 100,000 bit keys your secrets should survive into the next century. But you have to wonder—how dark are your secrets that you need them to last?

Monday, November 17, 2008

A "well known" theorem

In the excellent paper On the power of two, three, and four probes, the authors state:
It is well known that every graph with s vertices and at least 2s edges contains a cycle of length at most 2log s
My students puzzled over this one in two ways. (1) How to prove it? Two of them came up with correct proofs that were not that hard. (2) Is it really well known? Two of them searched the web for a proof. They could not find one.

The problem with the phrase It is well known that is that it may be well known to people who know it (duh) but not to others. People not in the know don't even know if it's hard or not (it's not). Perhaps they should have said It is easy to show that. Or given a hint to the proof.

I invite my readers to give proofs to see if they differ from my students', and also so that the next time someone needs to reference this, they can point to this blog and attribute it to some cute alias.
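
Here is a quick empirical check (mine; a sanity test, not a proof): on a random graph with s vertices and at least 2s edges, a BFS from every vertex finds a short cycle, and the shortest one found should come in well under 2 log s.

import math
import random
from collections import deque

def short_cycle_bound(adj):
    # BFS from every vertex; a non-tree edge (u,v) closes a walk of length
    # dist[u]+dist[v]+1 through the root, and that walk contains a cycle at
    # most that long, so the minimum over all roots bounds the girth above.
    best = math.inf
    for root in adj:
        dist = {root: 0}
        parent = {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif v != parent[u]:
                    best = min(best, dist[u] + dist[v] + 1)
    return best   # an upper bound on the length of the shortest cycle

s = 200
edges = set()
while len(edges) < 2 * s:
    u, v = random.sample(range(s), 2)
    edges.add((min(u, v), max(u, v)))
adj = {i: set() for i in range(s)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(short_cycle_bound(adj), 2 * math.log2(s))   # cycle found vs. 2 log s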

A student asked me if giving as a reference a blog site or an unpublished paper on line is legit. I would say it's more legit than giving as a reference a paper that is not on-line. A paper that is refereed but not online had a few people look at it closely. A paper that is unrefereed but online might have a lot more people look at it (then again, it might not). But since the reader can access it, he or she might be able to tell for himself or herself whether the statement they need has been proven properly.

Friday, November 14, 2008

Monotone Sequence Theorem: literature thing

Sometimes the literature gives a proof, and then a reference, but the reference is not really to that proof. Here is an example. The theorem in question is the following:
For any seq of n^2+1 distinct reals there is either a dec subseq of length n+1 or an inc subseq of length n+1.
In Proofs from the book the authors attribute the following argument to Erdos-Szekeres (paraphrased):
Let the seq be a(1),...,a(n^2+1). Associate to each a(i) the number t(i) which is the length of a longest inc subseq starting at a(i). If there is an i such that t(i) \ge n+1 then we are done. If not then the function mapping a(i) to t(i) maps a set of size n^2+1 to a set of size n. Hence there is some s such that n+1 elements of the seq map to s. These n+1 elements form a dec seq.
The actual proof in the Erdos-Szekeres paper is this (paraphrased):
Let f(n) be the minimum number of points out of which we can always select an inc or dec subseq of length n. We show f(n+1) = f(n)+2n-1. Let a(1),...,a(f(n)+2n-1) be a sequence of distinct reals. Out of the first f(n) of them there is an inc or dec subseq of length n. Call the last element of that subseq b(1). Remove it. (There is now a seq of length f(n)+2n-2.) Repeat the process to obtain b(1), b(2), ..., b(2n). Let A be the b(i)'s that are from inc subseq. Let B be the b(i)'s that are from dec subseq. It is easy to show that A is itself a dec subseq and that B is itself an inc subseq. If A or B has n+1 elements then we are done. The only case left is that A and B each have n elements. Let a be the last element in A and b be the last element in B. It is easy to show that a=b. But one of them was removed before the other, so this is impossible.
Thoughts:
  1. I suspect that in talks Erdos gave he did the proof now attributed to Erdos-Szekeres and hence people naturally assumed that this is the paper it was in.
  2. I am surprised that Proofs from the book gave this proof. There is, IMHO, a better proof. I do not know whose it is.
    Let the seq be a(1),...,a(n^2+1). Map every 1 \le i \le n^2+1 to (x,y) where x (y) is the length of the longest inc (dec) subseq that ends with a(i). It is easy to show that this map is 1-1. If all of the x,y that are mapped to are in [n]x[n] then the domain is bigger than the range, contradiction. (A code sketch of this argument appears after this list.)
  3. Erdos-Szekeres first proved this theorem in 1935. Martin and Joseph Kruskal proved it 15 years later without knowing that Erdos-Szekeres had proven it; though by the time Joseph Kruskal published it he knew. I have gathered up 5 proofs of the theorem that I know here.
  4. In those days it was harder to find out if someone else had done what you had done since they didn't have google, but it may have (in some cases) been easier since there were so many fewer researchers- you could just call them. OH- but long distance was expensive back then.
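
Here is a code sketch (mine) of the proof in item 2: map each position i to the pair (length of the longest inc subseq ending at a(i), length of the longest dec subseq ending at a(i)); on n^2+1 distinct reals this map is 1-1, so some coordinate must reach n+1.

import random

def monotone_witness(seq):
    m = len(seq)               # should be n^2 + 1 for the theorem
    inc = [1] * m              # inc[i] = longest inc subseq ending at seq[i]
    dec = [1] * m              # dec[i] = longest dec subseq ending at seq[i]
    for i in range(m):
        for j in range(i):
            if seq[j] < seq[i]:
                inc[i] = max(inc[i], inc[j] + 1)
            if seq[j] > seq[i]:
                dec[i] = max(dec[i], dec[j] + 1)
    # the pairs (inc[i], dec[i]) are pairwise distinct when the entries are distinct
    assert len(set(zip(inc, dec))) == m
    return max(max(inc), max(dec))

n = 10
seq = random.sample(range(10**6), n * n + 1)   # n^2+1 distinct values
assert monotone_witness(seq) >= n + 1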

Thursday, November 13, 2008

My New Hybrid

My younger daughter was deeply moved by An Inconvenient Truth and has been a staunch environmentalist ever since. She pushed me to change our lightbulbs as well as get a hybrid car as my next car. I usually don't let my kids influence our major purchase decisions but it's hard to argue with them when they're right.

So I got a Toyota Camry Hybrid about two weeks ago. When I ordered the car back in September I calculated I would make up the extra cost in about four years, but that was back when gas was $4/gallon and before Toyota slashed prices on their non-hybrid Camrys. I was told I had a 4-6 month wait but I picked up one sooner that someone else had backed out of, probably because of the economy.

I certainly get good gas mileage—I've driven 250 miles and still have half of my original tank. But I do notice several features that simply exist to make me feel good about getting a hybrid. Most cars have tank sizes so the car goes about 300 miles on a tank. They kept the large tank size in this car so I can better feel the gas mileage. There is an MPG dial next to the speedometer, a fancy display showing arrows as energy gets transferred between the gas tank, engine and batteries and when I turn the car off it gives me an "Excellent" when I had good gas mileage. I wonder what I do wrong when I don't get the Excellent.

The neatest feature has nothing to do with being a hybrid. I never have to take the keys out of my pocket; instead the car just senses them. The doors and the trunk just automatically unlock when I open them and I just press a key to start the car. All keys should work this way.

What's missing in cars these days is active Internet connectivity. I can easily think of many uses: Updates to maps, traffic, weather, local information, music streaming and VOIP. I'm sure there are many more possibilities people will come up with once a system is in place.

Oddly enough this will probably be my last hybrid, my next car will run solely on batteries or some other technology. That's life in the fast lane.

Wednesday, November 12, 2008

The Kakeya Conjecture over finite fields resolved! Why can't we resolve P vs NP?

Recently the Kakeya Conjecture over finite fields (henceforth Kakeya) was resolved. For information on what Kakeya is and how it was solved see Dvir's paper On the size of Kakeya Sets over Finite Fields that solved it or a nice entry on Terry Tao's blog. Some applications of the techniques are in the paper by Dvir and Wigderson Kakeya Sets, new mergers and old extractors.

Fields Medal Winner Terry Tao and others worked on Kakeya. There were partial results with difficult proofs. But the final proof was easy to understand. This does NOT mean it was easy to generate- we all suspect verifying is easier than generating. How easy was the proof to understand? So easy that I saw and understood a talk on it in a seminar and was able to reconstruct it while proctoring an exam.

Whenever I see a math problem solved in a way that is easy but had been missed I wonder: is there an easy solution to P vs NP that is eluding us?
  1. Kakeya did not have a community of people working on it. Even though the people working on it were brilliant there were not that many of them. To solve a math problem may require a diversity of viewpoints. P vs NP has a lot of people interested in it. Are they working on it? Do they have a diversity of viewpoints? I honestly don't know.
  2. There had been partial results on Kakeya. Hence there was hope. At this time there has been very little progress on P vs NP. (I would welcome counterarguments to this.)
  3. There were no results saying such-and-such technique cannot be used to solve Kakeya. For P vs NP there are several such results: oracles, natural proofs, algebraization. (hmmm- is having three results really having several results?). Is P vs NP the only currently open problem in math where there has been considerable effort in showing what techniques won't work? There may be some others in set theory where the conclusion independent of ZFC is a real option, but how about in fields of math where we expect to get a real answer?
  4. If P=NP (which I doubt) then a simple-to-follow-but-quite-clever proof that eluded us all is plausible. If P\ne NP then I doubt this would happen.

Tuesday, November 11, 2008

Barrington's Theorem

A few weeks ago Bill complained about how hard it is to get an electronic journal version of Barrington's theorem. Though Barrington's theorem was brand spanking new and exciting when I started graduate school in 1985 and it remains one of my favorite theorems, it doesn't often get taught in many graduate complexity courses these days. So let me talk about one of the great complexity results that, in some sense, lets you count in constant space.

Formally the theorem states that polynomial-size bounded-width branching programs have the same computation power as Boolean formulas. A branching program is a directed acyclic graph where all but two nodes are labelled by a variable name and have out-degree two: one edge labelled true and the other false. The other two nodes are the terminal nodes labelled accept and reject. There is a single root of in-degree zero. From the root, one follows a path, taking the true edge if the labelled input variable is true and the false edge otherwise, and accepts or rejects the input based on the terminal node reached. Layer the branching program based on the distance of each node from the root; the width is the maximum number of nodes in any layer.

Barrington showed that width-five branching programs can simulate any Boolean formula and thus the complexity class NC1, which includes the majority function. But the branching program in any layer can only remember one of five nodes, less than three bits of information, as it processes through the graph. Yet you still can tell if a majority of input variables are true. Amazing.

The key to Barrington's proof is that the group S5 (permutations on five elements) is non-solvable, i.e., there are permutations σ = (12345) and τ = (13542) such that στσ⁻¹τ⁻¹ is not the identity permutation. There is a simple write-up of the proof on pages 60-61 of the Boppana-Sipser survey.
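
As a sanity check (mine, not part of the proof), one can compose those two 5-cycles and verify that their commutator is a nontrivial permutation, in fact another 5-cycle:

def cycle_to_perm(cycle, n=5):
    # turn a cycle like (1,2,3,4,5) into a map on {1,...,n}
    perm = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        perm[a] = b
    return perm

def compose(p, q):
    # (p o q)(x) = p(q(x))
    return {x: p[q[x]] for x in q}

def inverse(p):
    return {v: k for k, v in p.items()}

sigma = cycle_to_perm((1, 2, 3, 4, 5))
tau = cycle_to_perm((1, 3, 5, 4, 2))
comm = compose(compose(sigma, tau), compose(inverse(sigma), inverse(tau)))

print(comm)                                  # a 5-cycle, not the identity
assert comm != {i: i for i in range(1, 6)}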

Some neat consequences of Barrington's theorem.

  • There are regular languages that are NC1-complete under projections of the inputs.
  • One can compute any language in PSPACE using not much more than a clock and constant bits of memory. (Cai and Furst)

Monday, November 10, 2008

STOC reminder- TODAY!/ NYU Theory Day

TWO ITEMS:

I) Reminder: STOC submission abstracts due today.




II)

                       New York Area Theory Day
                    Organized by:  NYU/IBM/Columbia
                    External sponsorship by: Google
                       Friday, December 5, 2008

The Theory Day will be held at
Courant Institute of Mathematical Sciences,
New York University, 251 Mercer Street,
Auditorium 109, New York.

                                Program

9:30   - 10:00    Coffee and bagels

10:00  - 10:55    Prof. Assaf Naor
                 Approximate Kernel Clustering

10:55  - 11:05    Short break

11:05  - 12:00    Prof. Joe Mitchell
                 Approx Algs for some Geometric Coverage
                 and Connectivity Problems

12:00  -  2:00    Lunch break

 2:00  -  2:55    Dr. Jonathan Feldman
                 A Truthful Mechanism for
                 Offline Ad Slot Scheduling

 2:55  -  3:15    Coffee break

 3:15  -  4:10    Prof. Yishay Mansour
                 TBA

For directions, please see
here
and
here
(building 46)

To subscribe to our mailing list,
follow instructions at
here

Organizers:
Yevgeniy Dodis   dodis@cs.nyu.edu
Tal Rabin        talr@watson.ibm.com
Baruch Schieber  sbar@watson.ibm.com
Rocco Servedio  rocco@cs.columbia.edu

=======================================================================

Prof. Assaf Naor
(New York University)

Approximate Kernel Clustering

In the kernel clustering problem we are
given a large n times n positive semi-definite
matrix A=(a_{ij}) with \sum_{i,j=1}^n a_{ij}=0
and a small k times k positive semi-definite
matrix B=(b_{ij}). The
goal is to find a partition S_1,...,S_k of {1,...,n}
which maximizes the quantity

 \sum_{i,j=1}^k (\sum_{(p,q)\in S_i\times S_j} a_{pq}) b_{ij}.

We study the computational complexity of
this generic clustering problem which
originates in the theory of machine learning.
We design a constant factor polynomial time
approximation algorithm for this problem,
answering a question posed by Song, Smola,
Gretton and Borgwardt. In some cases we
manage to compute the sharp approximation
threshold for this problem assuming the
Unique Games Conjecture (UGC). In particular,
when B is the 3 times 3 identity matrix
the UGC hardness threshold of this problem
is exactly 16*pi/27. We present and
study a geometric conjecture of
independent interest which we show
would imply that the UGC threshold when
B is the k times k identity matrix is
(8*pi/9)*(1-1/k) for every k >= 3.

Joint work with Subhash Khot.
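
To make the objective concrete, here is a small sketch (mine, not from the abstract) that evaluates it for a given partition, with B the 3 x 3 identity matrix mentioned above; the random A is only for illustration.

import numpy as np

def kernel_clustering_objective(A, B, labels):
    # C[i][j] = sum of A[p][q] over p in S_i, q in S_j
    k = B.shape[0]
    C = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            C[i, j] = A[np.ix_(labels == i, labels == j)].sum()
    return (C * B).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
A = X @ X.T                                  # positive semi-definite
n = A.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
A = H @ A @ H                                # center so the entries sum to 0 (stays PSD)
B = np.eye(3)                                # the k = 3 identity case from the abstract
labels = rng.integers(0, 3, size=n)          # a (random) partition S_1, S_2, S_3
print(kernel_clustering_objective(A, B, labels))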

-------------------------------------------------------------------------

Prof. Joe Mitchell
(Stony Brook University)

Approx Algs for some Geometric
Coverage and Connectivity Problems
We examine a variety of geometric
optimization problems.  We describe
some recent progress in improved
approximations algorithms for several
of these problems, including the
TSP with neighborhoods, relay
placement in sensor networks,
and visibility/sensor coverage.
We highlight many open problems.

-------------------------------------------------------------------------

Dr. Jonathan Feldman
(Google)

A Truthful Mechanism for
Offline Ad Slot Scheduling

Targeted advertising on web pages
is an increasingly important
advertising medium, attracting
large numbers of advertisers and users.
One popular method for assigning ads
to various slots on a page (for
example the slots along side
web search results) is via a real-time
auction among advertisers who have
placed standing bids for clicks.
These "position auctions" have been
studied from a game-theoretic
point of view and are now well
understood as a single-shot game.
However, since web pages are visited
repeatedly over time, there are
global phenomena at play such as
supply estimates and budget
constraints that are not modeled
by this analysis.

We formulate the
"Offline Ad Slot Scheduling" problem,
where advertisers are scheduled
beforehand to slots on a web page for
portions of the day.  Advertisers
specify a daily budget constraint,
as well as a per-click bid, and may
not be assigned to more than one
slot on the page during any given
time period.  We give a scheduling
algorithm and a pricing method that
amount to a truthful mechanism
under the utility model where bidders
try to maximize their clicks,
subject to their personal constraints.
In addition, we show that the
revenue-maximizing schedule is not
truthful, but has a Nash
equilibrium whose outcome is identical
to our mechanism.  Our
mechanism employs a descending-price
auction that maintains a solution
to a machine scheduling problem whose
job lengths depend on the price.

Joint work with
Muthu Muthukrishnan,
Eddie Nikolova and
Martin Pal.

-------------------------------------------------------------------------

Prof. Yishay Mansour
(Google and Tel Aviv University)

TBA

-------------------------------------------------------------------------




Friday, November 07, 2008

A new logical fallacy

Those of us who have taught logic to students are familiar with some of the fallacies they make: (1) confusing OR with XOR (reasonable given English language use), (2) thinking that (a--> b) --> (b-->a) and (3) thinking that from

A1 AND A2 AND A3 --> B

you can deduce something about the writer's opinion of A3.
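
A small truth-table check makes the point (a sketch of mine, not from the post): the implication holds in rows where A3 is true and in rows where A3 is false, so asserting the implication tells you nothing about A3.

from itertools import product

def implies(p, q):
    return (not p) or q

rows = [(a1, a2, a3, b)
        for a1, a2, a3, b in product([False, True], repeat=4)
        if implies(a1 and a2 and a3, b)]

# the implication is consistent both with A3 true and with A3 false
assert any(row[2] for row in rows)
assert any(not row[2] for row in rows)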

This recently happened, though it wasn't a student. It was a commenter on this blog. In a recent blog post I wrote about McCain's concession speech:
If he was that way the entire time he might have won (if he also didn't pick Palin and we didn't have the economic crisis and we were more clearly winning the Iraq War).
This is an IF-THEN statement. There is no logical way to deduce what I think of any of the premises. One of the commenters committed error (3) above:
A country is destroyed and half million people are killed, and yet the only thing you feel regret about is not "more clearly winning". Excuse me, Professor Gasarch, I never held hope for the humanity of US, but a comment like this from an intellectual in this country, just taught me how coldblooed the americans can be."
The commenter raised an interesting question: If a writer says A-->B then what can you deduce about the writer's opinion of A?
  1. If the work is in Large Cardinals then likely the writer thinks that the Large Cardinal hypothesis is true. Note that we do not know this logically.
  2. In papers that prove things contingent on P\ne NP or that factoring is hard or the usual derand assumptions the authors think the assumption is true. Note that we know this by sociology, not by logic.
  3. (I may be off on this one- if so please correct.) Non-Euclidean Geometry was started by assuming the Parallel Postulate was false, hoping to prove that that assumption was FALSE, and seeing what can be derived from it. Let A be For a line L and a point p not on that line there are an infinite number of lines through p that do not intersect L. Let B be The sum of the angles of a triangle is LESS THAN pi. When someone proved A implies B they may have thought that A was false.
  4. Pat Buchanan said (I am paraphrasing)
    If McCain had presented more ties linking Obama to Ayers and Wright then he would have won.
    While this could be a simple IF-THEN statement, given who he is we know that he thinks these ties are relevant. Keith Olbermann may have expressed a similar sentiment differently:
    If McCain had presented more lies linking Obama to Ayers and Wright then he would have won.
    In this case we can tell what Keith Olbermann thinks of the assumption.
  5. SO- if someone says A-->B then you can't really deduce what the speaker thinks of A LOGICALLY, but you can use other things he has said and his reputation to discern what he thinks of A. Reasoning from context and personality can be useful, though it is not as rigorous as we are used to. Students in a logic course should not use it.

Thursday, November 06, 2008

The Terminal Man

Michael Crichton passed away yesterday. His early novels and movies showing the potential dark sides of technology had a strong impact on me in my youth.

The Andromeda Strain deals with an extraterrestrial virus from a military satellite. The movie Westworld, written and directed by Crichton, is about a fantasy land where a gun-slinging robot, played by Yul Brynner, doesn't behave as he should.

The Terminal Man doesn't refer to someone about to die but to direct connections between humans and computers. If I remember right, people became addicted to stimulating themselves with these connections. Addicted to the network? You have to be kidding.

My favorite Crichton book is The Great Train Robbery about the meticulous planning and execution of a massive gold heist on a train in 1855. Not much technology but a very logical plot line. The movie, also written and directed by Crichton, not surprisingly follows the book quite closely.

I haven't enjoyed his later work as much. The mathematician Ian Malcolm in Jurassic Park comes off as a babbling philosophical know-it-all who happens to be always right. The ridiculous holographic database in Disclosure is just embarrassing. These later books often have gratuitous action scenes just so they might make better movies.

Nevertheless Crichton knew how to make technology very creepy. Even if these books were not quite that realistic they got the young me excited (and worried) about the power of technology and computers.

Wednesday, November 05, 2008

Random thoughts about the election

Random comments on the election.
  1. John McCain gave a nice concession speech and was excellent on SNL on Saturday. If he was that way the entire time he might have won (if he also didn't pick Palin and we didn't have the economic crisis and we were more clearly winning the Iraq War). However, I've heard that line before- Kerry, Gore, Dole also looked a lot better after they lost. Are candidates overly managed and overly cautious? Does this work against them? (Obama seems to have escaped that.)
  2. Whenever a party loses it splits into factions: (1) We lost because we strayed from our true principles. They can split further on what the true principles are. Some of these may support Palin in 2012. (2) We lost because we are too isolated from the mainstream. Some of these may blame Palin for the defeat.
  3. Could any Republican have won the Presidency this year? With an unpopular war and the economic crisis they would have needed a weak opponent and some distance from George Bush (Mitt Romney 2006 might have worked). Obama was strong competition and John McCain was seen (fairly or unfairly) as being just like W.
  4. Hillary-supporters had basically two arguments in the primaries: (1) Hillary has a better chance of beating McCain than Obama does. This has proven false. Or has it? better chance- how can that be quantified once the event has happened? (2) Hillary would make a better president than Obama. We will never know.
  5. Obama ran an excellent campaign. McCain ran a terrible campaign. McCain had current events against him (as Colbert has said the truth has a liberal bias). Here is a silly statement: Obama's victory is due 30% to his campaign, 40% to McCain's campaign, and 30% to reality. I just made up those numbers, but can that question be asked and answered?
  6. This is why I study math. Math has well defined questions that (for the most part) have answers (though we may not know them yet). But the question why did Obama win? can't really be asked rigorously or answered definitively. One can even criticize how I phrased it- should I have asked why did McCain lose?

Tuesday, November 04, 2008

Election Day

I made a snapshot of the electoral markets map at 5 PM Central time Monday to compare to the actual votes. Meanwhile you can continue to watch the markets aggregate rumors and results in real time at electoralmarkets.com.

I started a couple of different posts for today but nothing seemed to fit. This is an exciting day for America and possibly the world. If I were young and foolish I'd head down to Grant Park tonight for the big rally (as some of my younger colleagues will do) but now just plan to be at home sharing the experience with my family. Go vote if you can and let's watch history get made!

Monday, November 03, 2008

Will fivethirtyeight.com use this poll?

I had my Graduate Class in Communication Complexity VOTE FOR PREZ in a secret ballot. The rules were: They could vote for anyone on the ballot in Maryland (I supplied a ballot with all of the candidates in Maryland), OR write someone in, OR write NONE OF THE ABOVE. Here are the candidates and how many votes they got.
  1. Obama-Biden, Democratic party: 11 votes
  2. McCain-Palin, Republican: 1 vote
  3. McKinney-Clemente, Green: 1 vote
  4. Barr-Root, Libertarian: 0 votes
  5. Nader-Gonzalez, Independent: 1 vote
  6. Baldwin-Castle, Constitution: 0 votes
  7. None of the Above: 1 vote.
Speculation:
  1. Usually the Republican does better than this, and McCain is one of those Republicans who (perhaps used to) appeal to moderates and even democrats. So why only one vote? (1) ``I used to like McCain before he turned rightward this campaign'', (2) My class actually likes Obama. In the past there were people who couldn't stand Kerry or Gore, so while they didn't like Bush, they voted for him anyway.
  2. Usually I get one or two Libertarian voters. Why not this year? Speculation: (1) They think of Barr as really being a Republican, (2) They think Obama really does transcend party and ideology. (3) The libertarians are tired of having their votes not count. (4) They got confused by the butterfly ballot that I used.
  3. Our student body is somewhat liberal. Or maybe it's just that the students at Univ of MD interested in applying Ramsey Theory to Communication Complexity are somewhat liberal.
  4. One of my colleagues was amazed that McCain got one vote.
  5. Of the 15 people in my class, around 10 are from other countries.
  6. This could be bad news for McCain. There is that classic saying: as goes Graduate Communication Complexity, so goes the nation.
  7. Is it foolish to look at a small poll from an obviously biased source? Perhaps yes, but compare that to the networks yammering on endlessly about small shifts here and there.