Thursday, July 28, 2011

The Problems of LaTeX

By request a post that may create the biggest backlash since I declared myself Unix free.

\begin{rant}
LaTeX is a great system for mathematical documents...for the 1980s. But the computing world changed dramatically and LaTeX didn't keep up. Unlike Unix I can't give it up. I write papers with other computer scientists and mathematicians and since they use LaTeX so do I. LaTeX still has the best mathematics formulas but in almost every other aspect it lags behind modern document systems.

WYSIWYG: I love seeing the final document as I write it. There are front ends to LaTeX that approximate this, but they produce LaTeX code that makes it nearly impossible to collaborate unless everyone uses the same editor, and we don't. "Code" is the right word: I have to compile a LaTeX document and then start a separate program to see it.

Collaboration: The very reason I'm stuck with LaTeX is its greatest weakness. We all have different macros, style files and bibtex formats (and some don't use bibtex at all). We all have to agree at the beginning which of our homegrown stuff to use, and merging already-written documents is a bear. How often does someone send you a LaTeX document and you have to email back that they forgot some style file?

LaTeX documents are saved as text files, which have different formats on different machines. I hate seeing ^M at the end of every line. Some people break up LaTeX lines at reasonable places; others don't, which messes up my editor and makes it hard to figure out what my co-author has changed. At least email attachments avoid the old >From problem.

Microsoft Word has a great system for tracking revisions. Google Docs lets people edit at the same time. Nothing close to either for LaTeX.

User Friendly: LaTeX is not user friendly. Try opening a text editor and (without looking at an old LaTeX document) write a LaTeX document to say "Hello World!" that will compile on the first try. Now go to Google Docs, create a new document and type "Hello World!". See the difference. Don't even get me started on creating a table.
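For the record, here is roughly the smallest complete LaTeX document that produces those two words (a minimal sketch; real papers need a much longer preamble):

\documentclass{article}
\begin{document}
Hello World!
\end{document}

Compare that to typing two words into a blank Google Doc.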

Backward Compatibility: In the early 90's, LaTeX went through a major upgrade (LaTeX 2.09 to LaTeX2e). There was a compatibility mode that claimed to be fully backward compatible. Not even close. Then they changed the font system, rendering my old documents unreadable. Luckily LaTeX hasn't changed significantly since then.

It's not difficult to convert between Word, Docs and most other document systems but nearly impossible to move to/from LaTeX.

I don't use LaTeX when I don't need to. I usually use Word or Docs for recommendation letters and other documents without many formulas or references, including my upcoming P/NP book.

What we need is a way out of LaTeX: add-ons to Google Docs that make it as nice for math as LaTeX, and the ability to import old LaTeX documents, style files and bibtex files. Not holding my breath.
\end{rant}

Monday, July 25, 2011

Why did 1+1=2 take Russell and Whitehead 300 pages?

In my post about the myth that logicians are crazy I mentioned in passing that Whitehead and Russell spent 300 pages proving 1+1=2 (but were both sane). Two people privately emailed me:
Are you sure Russell and Whitehead weren't a few axioms short of a complete set? How could they take 300 pages to prove 1+1=2? Isn't it... too obvious to be worth proving?
I responded by saying that they had to define 1, +, =, and 2 rigorously. One of them responded Are you a few limit points short of Banach space? That aside, there are some questions the 1+1=2 proof brings up:
  1. How did they spend 300 pages proving 1+1=2?
  2. Is it easier in ZFC? (A short sketch appears after this list.)
  3. How important is or was Principia Mathematica? Wikipedia says PM is widely considered by specialists in the subject to be one of the most important and seminal works in mathematical logic and philosophy since Aristotle's Organon. The Modern Library places it 23rd in a list of the top 100 English-Language nonfiction books of the twentieth century. Here is the list they are referring to. The other books look... readable.
  4. I had thought that nobody reads PM anymore; however, its entry on Amazon says it has a rank of roughly 294,000. This is far better than a book that truly nobody reads. For example, this book has an Amazon rank of roughly 5,300,000.
  5. While more people are buying it than I thought, are people actually reading it? Did they ever? My guess is no and no, but I really don't know.
  6. Can a book be influential if few people read it? Yes, if they are the right people. Godel read it and I think it inspired him. (It's mentioned in the title of his Incompleteness paper.)
  7. PM was an early attempt to formalize all of math from the ground up. This may be one of those tasks that you are almost destined to do in a clunky way before doing it smoothly.
  8. I am talking in a vacuum here, having never read it. If any of my readers have actually read it and want to comment on what it was really like, you are more than invited to do so.
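On question 2: yes, it is much shorter in ZFC, because the modern set-theoretic machinery does most of the work. A sketch, using the standard von Neumann coding of the natural numbers (other codings work similarly):
\[
0 = \emptyset, \qquad 1 = \{0\}, \qquad 2 = \{0,1\}, \qquad S(x) = x \cup \{x\}, \qquad m + 0 = m, \qquad m + S(k) = S(m+k).
\]
\[
1 + 1 = 1 + S(0) = S(1+0) = S(1) = 1 \cup \{1\} = \{0,1\} = 2.
\]
(The real work is hidden in justifying the recursion that defines +.)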

Thursday, July 21, 2011

Delay for a Postdoc

Suppose you have a tenure track offer at the University of Southern North Dakota and a postdoc offer at MIT. Tenure track jobs are hard to get so you want to accept the USND position but before you spend your life in Hoople you'd like some more time in a top research place.

So you ask the CS chair at USND if you can spend the next year as a MIT postdoc before going to USND. The chair needs you to teach algorithms that spring. Also if you don't take the job, he may lose the position to the music department. What are his choices?

  1. Say no, that you have to start this fall or not come at all. This runs the risk that you will not accept the USND position.
  2. Say yes and find someone else to cover algorithms. This has a different risk, that you might find some other job and not come to USND at all.
Is it ethical for you to say you are coming to USND in a year and send out new applications in the fall? What if you don't go outright looking for a job but Michigan asks you to apply? Is it ethical for Michigan to pursue you if it knows about your promise to USND? What if you fall in love with someone in Ann Arbor?

Someone I know (not CS) turned down an academic job she had promised to take. The school sent her a bill for $12,000 to cover the expenses of finding someone else to cover the classes. She didn't pay.

One economic solution: The chair agrees to the postdoc but requires you to pony up $12,000 now which you will get with interest when you start at USND. Trouble is most grad students don't have $12,000 to pony up and probably would walk away from a school making this offer.

This would be much easier if universities worked like baseball teams. In order to get you, Michigan could offer USND Seth Pettie and a grad student to be named later.

Wednesday, July 20, 2011

Looking for people to review books for my column

I am looking for reviewers for the following books for my SIGACT NEWS book review column. I will try to edit this post to keep the list up to date; however, if you are reading this past July 28 then you should look at this list which I am constantly updating.
  1. Look here for advice for reviewers.
  2. Look here for the LaTeX template.
  3. Look here for past columns.
IF you want to review a book then FIRST go to the advice for reviewers and see if you really want to review a book. If so then email me your name and the postal address to send the book to (if you are out of the country I will have the publisher send it to you). Your reward for doing the book review: a free copy of the book! I would like to get requests before July 28 since that is when the next column goes to press and I would like to have an updated books list there; however, if you ask me past that date, and the book is still available, you can still review it. (ADDED LATER- DEADLINE IS MID-OCT OR MID-NOV, BUT CAN BE NEGOTIATED IF THERE IS A REASON FOR LATER.)

Bio Comp
  1. Introduction to Bio-ontologies by Robinson and Bauer.


Math
  1. The Dots and Boxes Game: Sophisticated Child's Play by Berlekamp.
  2. New Mathematical Diversions by Martin Gardner.
  3. The Magic Numbers of the Professor by O'Shea and Dudley.


Books on Misc-Comp Sci
  1. Random Walks and Diffusions on Graphs and Databases by Blanchard and Volchenkov.

Monday, July 18, 2011

Disproving the Myth that many early logicians were a few axioms short of a complete set

While I was working on this post another blogger posted on the same topic here and I found a book review of Logicomix that touched on some of the same issues here. (For MY review of Logicomix see here.) They are very good sources and I will refer to them in this post.

There is a notion that logicians who worked in foundations early on in the field were crazy. I give examples of where this is said and then I look at the real evidence.
  1. In Rudy Rucker's post about Turing he writes
    ... it really does seem possible that Turing killed himself. Like the other logicians Godel and Cantor, he seems to have been somewhat nuts. Funny how many logicians are crazy and irrational. A paradox.
  2. In Logicomix, a great comic book about the foundations of logic, there is an allusion to Logicians being crazy.
  3. In Gian-Carlo Rota's Indiscrete Thoughts he writes: it cannot be a complete coincidence that several outstanding logicians of the 20th century found shelter in asylums at some point in their lives: Cantor, Zermelo, Godel, and Post are some.
I've also seen an explanation for this alleged phenomenon: Logicians were searching for absolute certainty, and either the search can drive you crazy or thinking you can find absolute certainty means you were crazy ahead of time.

So the people above, and others, give some examples of logicians being crazy and then claim that many logicians are crazy. I am reminded of people who say It was cold the other day, looks like Global warming is wrong.

Let us look at the actual record. I will look at all of the logicians in Wikipedia's list of logicians who
  1. were born between 1845 and 1912. (1845 is when Cantor was born, 1912 is when Turing was born.)
  2. I ruled out a few people who were really philosophers, and also Banach, who I don't think would have called himself a logician.
For each logician on the list who meets my criteria I say if I think they are sane or a few axioms short of a complete set. I am not a historian--- corrections are more than welcome. I also freely admit that crazy is not well defined.

You may well disagree with what years I pick and my opinions. The point is to get an intelligent discussion going.
  1. Wilhelm Ackermann (1896-1962): He defined the function that bears his name. He also worked on the epsilon-calculus which formed the basis for Bourbaki's logic. Reading Bourbaki might drive one crazy; however, forming the basis for it does not. He was quite sane (Ackermann, that is-- Bourbaki had multiple personality disorder).
  2. Alice Ambrose (1906-2001): She had the longest lifespan of anyone on this list. She studied with Moore and Wittgenstein and got two PhDs. (In those days a woman had to do twice as much as a man to get a job.) She was more on the philosophy side of logic, but certainly had math training. She wrote a textbook with her husband, known as Ambrose and Lazerowitz. Sane!
  3. Paul Bernays (1888-1977): He worked with Hilbert on alternative set theories. Sane!
  4. Evert Willem Beth (1908-1964): He helped to establish Logic as a discipline. Sane!
  5. L.E.J. Brouwer (1881-1966): He thought that all math should be constructive. This point of view lost the battle of ideas; however, that does not make him crazy. The Wikipedia article quotes Martin Davis as saying: he felt more and more isolated, and spent his last years under the spell of totally unfounded financial worries and a paranoid fear of bankruptcy, persecution, and illness. However, Dirk van Dalen wrote a scholarly two-volume biography of Brouwer that indicates that Brouwer was not crazy. And I agree. Sane!
  6. Georg Cantor (1845-1918): He had a new way of looking at infinity that was brilliant and is now accepted. That does not make him crazy. He was also convinced that Bacon wrote the plays of Shakespeare and that Joseph of Arimathea was the father of Jesus Christ. That does not make him crazy. However, he was obsessed with these views and was in and out of sanitariums. A few axioms short of a complete set.
  7. Rudolf Carnap (1891-1970): I originally thought he was more of a philosopher; however, he published in thermodynamics and the foundations of probability. He fled Hitler's regime and later refused to sign a loyalty oath in America (during the McCarthy Era). His second wife committed suicide. He led an interesting life but was sane.
  8. Alonzo Church (1903-1995): He invented (discovered?) the lambda calculus, proved that Peano Arithmetic is undecidable, and articulated what is now called the Church-Turing Thesis. These are all sane things to do. (Bob Soare distinguishes Church's Thesis from Turing's Thesis here.)
  9. Haskell Curry (1900-1982): He worked in combinatory logic. There is a programming language named after his first name! (see here). Sane!
  10. Adolf Fraenkel (1891-1965): The F in ZF-set-theory. Provably Sane!
  11. Gottlob Frege (1848-1925): He hated Jews, Catholics, and the French. That might make him unpleasant to hang around, especially if you are a French Jew who converts to Catholicism. However, that does not make him crazy. He is often given as an example of someone who was crazy, though the links (here and here) argue for Frege being sane. I defer to the two links. Sane!
  12. Gerhard Gentzen (1909-1945): He made the cut- Sane!
  13. Kurt Godel (1906-1978): He stopped eating because he thought people were trying to poison his food. They weren't. A few axioms short of a complete set.
  14. Jean Van Heijenoort (1912-1986): Best known in Logic for writing From Frege to Godel, a history of Logic from ... Frege to Godel (duh). Best known outside of logic for being Trotsky's secretary and later a historian of that movement. He was killed by his estranged fourth spouse. An interesting life, an interesting death, but he was sane.
  15. Jacques Herbrand (1908-1931) Has the shortest lifespan (died at 23 in a mountaineering accident) of anyone on this list. He worked in proof theory. Sane!
  16. Arend Heyting (1898-1980) He continued Brouwer's work on intuitionism. Sane!
  17. David Hilbert (1862-1943): In Logicomix they claim that Hilbert's son Franz had a mental illness and Hilbert cut off all contact with him. However, this refutes that and claims that Hilbert's son was only put away for 3 years and then rejoined his family. One may question whether David Hilbert deserves a World's Greatest Father mug, but one cannot question his sanity.
  18. Clarence Irving Lewis (1883-1964): He took exception to Principia's use of material implication. I'm impressed that he read and understood Principia enough to have objections. Sane!
  19. Stanislaw Jaskowski (1906-1965): He worked in Intuitionistic Logics. Since I can't prove that he was crazy I assume he was sane.
  20. William Ernest Johnson (1858-1931): He wrote three volumes on logic which showed technical expertise but were superseded by Principia Mathematica. This did NOT drive him crazy. Sane!
  21. Philip Jourdain (1879-1919): He was interested in paradoxes and formed the card version of the liar's paradox. He also worked on algebraic logic. Quite sane. His sister Eleanor Jourdain claimed to have traveled through time and seen ghosts, but was not a logician.
  22. Stephen Kleene (1909-1994): Kleene hierarchy, Kleene star, Kleene algebras are all named after him. He also proved the recursion theorem. Did this go to his head and make him insane? NO- he was totally sane.
  23. Christine Ladd-Franklin (1847-1930): Her PhD was on Algebra and Logic. She faced problems being a woman in a man's field but kept her sanity.
  24. Stanislaw Lesniewski (1886-1939): He rejected axiomatic set theory (because of Russell's paradox) and tried to obtain other formal systems to replace it. A noble effort that failed. Still, he kept his sanity.
  25. Adolf Lindenbaum (1904-1941): He proved Lindenbaum's Lemma- every consistent theory of predicate logic can be extended to a complete consistent theory. Like many major advances, profound at the time, easy to prove now. Certainly sane.
  26. Leopold Lowenheim (1878-1957): The Lowenheim of Lowenheim-Skolem. See Skolem for more on that. A model of sanity.
  27. Jan Lukasiewicz (1878-1956): Wikipedia says He thought innovatively about traditional propositional logic. Is innovatively a word? My spell checker does not think so but whoever wrote his Wikipedia entry thinks so. Sane.
  28. Saunders Mac Lane (1909-2005) (He preferred the space between Mac and Lane.) His PhD thesis was on Logic and he also worked in Category theory. But he also did lots of Algebra. Sane.
  29. Carew Arthur Meredith (1904-1976): He worked on obtaining short axiom basis for logic systems. Sane.
  30. John von Neumann (1903-1957): Calling him a logician seems odd since he contributed to so many fields. Sane.
  31. Jean Nicod (1893-1924): He showed that all of propositional logic can be axiomatized with a single axiom using only the Sheffer Stroke. Sane.
  32. Pyotr Novikov (1901-1975): He proved the word problem for groups undecidable. His son Sergei Novikov won a Fields Medal in 1970 and, more importantly, is a professor at The University of Maryland! Sane.
  33. Giuseppe Peano (1858-1932): His Wikipedia entry calls him the founder of Mathematical Logic and Set Theory. That seems over-the-top, but not by much. His axiom system is still the standard. Sane.
  34. Emil Post (1897-1954): He introduced Turing degrees. In the mid 1940's he posed Post's Problem, which is to find an r.e. set (now called c.e.) that is neither decidable nor complete. This was solved in 1956 by Friedberg and Muchnik independently. He suffered from mental illness. A few axioms short of a complete set.
  35. Mojzesz Presburger (1904-1943): Presburger proved Presburger Arithmetic was decidable. What are the odds of that!? Sane!
  36. William Quine (1908-2000): He was more of a philosopher; however he did do some math. At Harvard he taught Symbolic Logic every fall for 50 years. That might drive some crazy; however, he was quite sane.
  37. Frank Ramsey (1903-1930): The paper where he proved what is now known as Ramsey's Theorem was titled A Problem in Formal Logic and solved a case of the Decision Problem. He regarded himself as a logician so we shall too. Speculation: He would be surprised at where his work led (combinatorics) and then pleased that it led back to logic again: the Large Ramsey Theorem (see also here) and much work in the reverse mathematics of Ramsey's theorem.
  38. Raphael Robinson (1911-1995): He worked in Logic and Number Theory. He is probably best known for his work on tiling the plane. He married Julia Bowman (who changed her name to Julia Robinson) who was also a logician but born in 1919--- a little too late to be on this list. Having two academics in the same area get married might drive some crazy, but not them. Sane!
  39. J. Barkley Rosser (1907-1989): He strengthened Godel's incompleteness theorem. Sane!
  40. Bertrand Russell (1872-1970): He was obsessed with the quest for certainty; however, that does not make him crazy. He had several wives (not at the same time) and believed in open marriage. He was not crazy, just ahead of his time. Sane.
  41. Moses Schonfinkel (1889-1942): He worked in Combinatory Logic. By 1927 he was in a sanitarium. The only non-famous logician on my list who was a few axioms short of a complete set.
  42. Thoralf Skolem (1887-1963): He is best known for the Lowenheim-Skolem theorem: the notion that any consistent countable first-order theory has a countable model is very interesting--- One corollary: there is a countable model of the theory of the reals. Thinking about that might drive some crazy, but not him. Sane!
  43. Alfred Tarski (1901-1983): The Banach-Tarski paradox is crazy; however, Tarski was not. Sane.
  44. Alan Turing (1912-1954): He defined Turing Machines, though he didn't call them that. The story I had assumed was true is that the British Government made him take hormones (or something) to cure him of his homosexuality, and this drove him to suicide. But the story doesn't quite work with the timeline. He committed suicide a few years after he was forced to take drugs. Delayed reaction? Suicide for some other reason? Really was an accident? In any case, since his possible suicide is the only evidence that he was crazy I say Sane!
  45. Nicolai Vasilev (1880-1940): The originator of non-Aristotelian logics. Sane.
  46. Alfred North Whitehead (1861-1947): In Russell-Whitehead's Principia Mathematica they spend 300 pages proving that 1+1=2. This might drive some insane but not him. Whitehead was stark raving sane.
  47. Ludwig Wittgenstein (1889-1951): He gave away all his money and seemed to be a self-hating Jew. Odd yes, but he was sane. (NOTE- Scott Aaronson left a comment that argues that Wittgenstein should be classified as a few axioms short of a complete set. I believe his arguments (they are backed up by facts) and may later redo the stats at the end of this post.)
  48. Ernst Zermelo (1871-1953): The Z in ZF set theory. He disapproved of Hitler's Regime. Hardly crazy. Rota says that Zermelo was crazy but neither I nor this post have been able to find any evidence of this. Zermelo did spend time in a hospital for lung problems, which may have confused Rota.
So what to make of all of this?
  1. Cantor, Godel, Post and Schonfinkel were crazy. So we have 4 out of 48 who were crazy. That's around 8%. This website claims that 6% of all people are crazy. So 8% seems high, but the sample size is pretty small. Conclusion: Same as the posts on the same topic referenced at the beginning: the notion that people in logic are crazy is not well founded. In addition, this post argues that the problems Cantor, Godel, and Post had were unrelated to their study of logic. (There was no comment on Schonfinkel.)
  2. AH- but Rota said that so many outstanding logicians were crazy. Since three of the four who I say were crazy were outstanding, there may be a point here. One could look at who on my list was outstanding and see what percent of those logicians were crazy. However, determining who was outstanding is even harder than determining who was crazy, so I leave it to others to continue this work.
  3. There were some on the list that in my opinion were sane but others think were crazy: Brouwer, Frege, Turing, Zermelo. Perhaps more. If enough of them turn out to be crazy then there may be something to this logicians are crazy theme; however, I doubt this will happen.
  4. Was it crazy to spend so much time and effort on this one post? I am not on the logic list, nor was I born between 1845 and 1912 so the answer is not relevant to the study.
  5. This blog posting has a crazy number of links: 71. That breaks the record for this blog which was held by this entry which had around 37.

Friday, July 15, 2011

Math, the Universe, and Everything: Max Tegmark's Interpretation of reality (guest post)

(This is a guest post by Nadia Jones who blogs at online college about education, college, student, teacher, money saving, movie related topics. You can reach her at nadia.jones5@gmail.com. Why is she doing a guest blog? She asked me, pointed me to some of her work, and suggested some topics that seemed reasonable. You can do that too!)

Math, the Universe, and Everything: Max Tegmark's Interpretation of Reality

Despite how much mathematicians like to think that their work is the end-all, be-all, it can sometimes be quite complicated to explain to friends and family who are not familiar with the pleasures and perils of doing high-level math exactly why it is so important. At the same time, however, devout followers of math still realize that there is an element of get-your-head-out-of-the-clouds, especially when dealing with abstract mathematics that does not directly apply to lucrative career paths or have potential for advancing research in an academic setting.

But what if someone were to tell you that everything that we know, everything that we feel, is all a complex series of mathematical structures? Of course, many have theorized that math and disciplines that are heavily math-based like physics, are very accurate ways to describe the world as it exists, but cosmologist Max Tegmark takes things one step further.

In an absolutely fascinating article published in Discover Magazine (here), Tegmark explains his theories, which were almost impossible to publish or even have taken seriously a few years ago. Taking a leaf out of string theory's book, Tegmark has endeavored to explain what is known in popular science as "alternative" or "parallel" universes. In more serious academic circles, these universes are known as "multiverses." Tegmark notes that others have posited three multiverses, but he has added a fourth: the mathematical multiverse. (Another article about Tegmark is here.)

Tegmark describes this particular multiverse as such in the Discover interview:
Galileo and Wigner and lots of other scientists would argue that abstract mathematics describes reality. Plato would say that mathematics exists somewhere out there as an ideal reality. I am working in between. I have this sort of crazy-sounding idea that the reason why mathematics is so effective at describing reality is that it is reality. That is the mathematical universe hypothesis: Mathematical things actually exist, and they are actually physical reality.


Although Tegmark's work has not received as much critical attention as other mathematicians' work, it is his melding of physics, pure math, and cosmology that has helped his research be more seriously considered. For more information on Tegmark's theories, check out his MIT-hosted website, The Universes of Max Tegmark.

Thursday, July 14, 2011

A slightly diff take on Lipton's use of Ramsey's Theorem

This post takes an idea of Dick Lipton's in a slightly different direction.

ω(G) is the size of the maximum clique in G.

Recall that, for all 0 < δ1 < δ2 < 1, the following problem is NP-hard: Given a graph G with the PROMISE that either ω(G) ≤ n^{δ1} OR ω(G) ≥ n^{δ2}, determine which one it is. Output NO in the first case and YES in the second case. (Reference: David Zuckerman proved the result here by derandomizing a result of Johan Hastad that had as its conclusion ZPP=NP rather than P=NP. See here for Hastad's paper.)

Note the following,
If ω(G) ≥ n^{δ2} then, by Ramsey's Theorem and the current bounds known on the Ramsey numbers, for ANY 2-coloring of the edges of G there will be a mono clique of size 0.5·δ2·log n. (Slightly larger values can also be obtained.)
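Where that bound comes from (a sketch using the standard estimate R(k,k) ≤ 4^k):
\[
\text{Set } k = \left\lfloor \tfrac{\delta_2}{2}\log_2 n \right\rfloor. \quad\text{Then}\quad R(k,k) \le 4^k = 2^{2k} \le 2^{\delta_2 \log_2 n} = n^{\delta_2} \le \omega(G),
\]
so any 2-coloring of the edges of a maximum clique of G contains a monochromatic K_k.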
This gives rise to the following POSSIBLE RTIME(n^{O(log n)}) algorithm for the promise problem (a toy code sketch appears after the list).
  1. Input G
  2. Do the following n^{A log n} times or until you get a NO. (A is a constant that we pick later.)
    1. Randomly 2-color the edges of G
    2. Look for a mono clique of size 0.5·δ2·log n. (This takes n^{O(log n)} time.)
    3. If you DO NOT find one then output NO and stop. (In this case you KNOW the answer is NO.)
    4. Otherwise try again.
  3. (If you got here then every coloring you tried had a mono clique of size 0.5·δ2·log n.) Output YES. (You are NOT SURE that this is correct.)
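Here is a toy version of the above (my own sketch, not code from the Zuckerman or Hastad papers; the function names and the brute-force clique search are just for illustration):

import itertools
import math
import random

def mono_clique_exists(n, edges, coloring, k):
    # Brute force: is there a set of k vertices that forms a clique in G
    # whose edges all get the same color? Takes n^{O(k)} time, which is
    # n^{O(log n)} when k is on the order of log n.
    for verts in itertools.combinations(range(n), k):
        pairs = [frozenset(p) for p in itertools.combinations(verts, 2)]
        if any(p not in edges for p in pairs):
            continue  # these k vertices are not a clique of G
        if len({coloring[p] for p in pairs}) == 1:
            return True  # monochromatic clique found
    return False

def promise_clique(n, edges, delta2, trials):
    # If some random 2-coloring has NO mono clique of size ~0.5*delta2*log n,
    # then omega(G) < n^{delta2} (else Ramsey's Theorem would force one),
    # so under the promise the answer is NO, with certainty.
    # Otherwise output YES, without certainty.
    k = max(2, int(0.5 * delta2 * math.log2(n)))
    for _ in range(trials):
        coloring = {e: random.randint(0, 1) for e in edges}
        if not mono_clique_exists(n, edges, coloring, k):
            return "NO"   # certified correct
    return "YES"          # only a guess

# Tiny usage example on a random graph (not a hard promise instance):
n = 12
edges = {frozenset(e) for e in itertools.combinations(range(n), 2)
         if random.random() < 0.5}
print(promise_clique(n, edges, delta2=0.5, trials=50))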
The algorithm clearly runs in time n^{O(log n)}. But does it work? And what should A be?

  1. IF ω(G) ≥ n^{δ2} then the algorithm will correctly say YES.
  2. IF ω(G) ≤ n^{δ1} then will the algorithm find a coloring that shows this? Or more precisely, will this happen with high prob?
In order to get NP in RTIME(n^{O(log n)}) we need the algorithm to work for SOME 0 < δ1 < δ2 < 1. Hence we need the following CONJECTURE to be true:
There exist 0 < δ1 < δ2 < 1 and a constant A such that the following is true: For almost all graphs G with ω(G) ≤ n^{δ1}, the prob that a random 2-coloring of the edges will have a mono clique of size 0.5·δ2·log n is LESS THAN 1 - 1/p(n), where p(n) = n^{A log n} is the number of trials above. (ADDED LATER: Almost All Graphs means for all but a FINITE number of graphs.)
So, what to make of this?
  1. One MAY think the following: Since we DO NOT think that NP is in RTIME(n^{O(log n)}), the conjecture is false. Perhaps it can be PROVEN false (unconditionally) using the techniques of Zuckerman or Hastad, or other techniques in the area.
  2. Is our confidence that NP is not in RTIME(n^{O(log n)}) SO STRONG? We could at least TRY to prove the conjecture.
  3. One could put the correct Ramsey number, and a good set of coin flips, into an advice string. This would yield an algorithm in DTIME(n^{log n})/poly. A slightly weaker conjecture would suffice to prove this algorithm works.

Tuesday, July 12, 2011

FOCS Accepts

The list of FOCS accepts is out, with abstracts, with PDF links (via Kintali) and in Algorithmic Game Theory (Nisan) and Algorithms (Eppstein). The FOCS conference will be held October 23-25 in Palm Springs.

Looks like a strong collection of papers. Lots of tight bounds. I'm a sucker for tight bounds because it means you have the right answer and less chance of follow-up papers.

Here are a few of the papers that caught my eye.

Randomness buys depth for approximate counting by Emanuele Viola. Buys you exactly two layers of AND-OR gates. Viola has another nice paper Extractors for circuit sources.

A Small PRG for Polynomial Threshold Functions of Gaussians by Daniel Kane. I like Gaussians because you can look at them at any angle and the distribution doesn't change.

Optimal bounds for quantum bit commitment by André Chailloux and Iordanis Kerenidis. Perfect quantum bit commitment is impossible but you can do better than classical. This paper shows exactly how well you can do.

Information Equals Amortized Communication by Mark Braverman and Anup Rao. A nice connection between information theory and communication complexity. I wonder if this can be tied into Kolmogorov complexity.

Thursday, July 07, 2011

Scooped by 400 years

This article claims that finding the area under a curve by dividing up the region into rectangles is helpful. Maybe they could take some sort of... limiting process where the rectangles get skinnier. Who knows- they may be able to get an EXACT formula for, say, the area under the curve y = x^2 from x = 0 to x = a. (The Wikipedia entry on the article claims that it rediscovered the trapezoid rule. See here.)
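For the record, the limiting process they are groping toward is a textbook computation:
\[
\int_0^a x^2\,dx \;=\; \lim_{n\to\infty} \sum_{i=1}^{n} \frac{a}{n}\left(\frac{ia}{n}\right)^2 \;=\; \lim_{n\to\infty} \frac{a^3}{n^3}\cdot\frac{n(n+1)(2n+1)}{6} \;=\; \frac{a^3}{3}.
\]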

I was going to begin an April Fools' Day post with the above; however, since the article (which is real) is already out there and being criticized here, here, here, here, and here, I decided not to go that direction. I have a different angle.

How often do you get a result and then find that someone else already had it? Is this MORE or LESS common in the internet age? There are several competing forces:
  1. With search tools it's EASIER to find what's already out there.
  2. With email or various websites like stack exchange it is EASIER to ASK if something is already known.
  3. Some people are gathering up stuff and putting it online, making it EASIER to find- if you know where to look. (Examples: the website of all of Ronald Graham's papers, or the website of some number theory related to VDW stuff.) I thought I found a website of secret sharing papers, but when I tried to find it again I hit several sites about different kinds of secrets. (ADDED LATER: updated site for Ron Graham's papers: here, and a list (but not links) of papers on secret sharing is here.)
  4. Areas are getting more specialized and each uses its own terminology, making it HARDER to know if what you have done is original.
  5. There is so much out there that it is HARDER to see what's already been done.
  6. NOT everything is online. The material that is not online might be HARDER to find than it used to be.
There are other issues. What if your proof is cleaner but essentially the same? What if it truly is independent? What if people come from completely different motivations? What if you read a result, forget that you read it, and later think it's yours (it's not)? I've seen all of these things happen and in all combinations.

Tuesday, July 05, 2011

The Quantum Tivo

Chuck Klosterman writes on watching sports on tape delay and Jeff Ely follows up. I take a quantum mechanics view: A sporting event saved on my Tivo is like Schrödinger's cat: Until I watch the game or otherwise learn the result, the outcome hasn't been determined. There are multiple worlds with different game outcomes and until observed we do not know which world we live in. There is no physical difference between watching an event live or delayed. If I get partial information on the outcome, like a half-time score, then the outcome is then just conditioned on this partial information.

We also have entanglement. Two people can be light-years apart watching the same game on their Tivos. They will see the same outcome. The Tivos are entangled. Even though from each person's view the outcome is random, the outcomes will be random in the same way.

That's where the analogy to quantum ends. Entangled Tivos can't explain Bell's inequalities.

Let me add a reason for watching on tape delay: The fast-forward button. If a game looks one-sided, I can speed up the game to see if it gets close again. And some sports (yes, soccer, I'm talking about you) are vastly improved with the entire game watched in double speed.