Wednesday, March 30, 2005

Sublogarithmic Space

A reader asks
I wanted to ask you something about the Savitch and Immerman-Szelepcsényi theorems that I saw you have in your weblog. In both theorems we have that s(n)≥log n. Why is that?
For any space function s(n), we also need a pointer to read every bit of the input, and thus we have n·2^(O(s(n))) possible configurations. For s(n)≥log n we can safely ignore the extra factor of n, but for s(n)=o(log n) this factor causes problems for directly applying the proofs of Savitch and of Immerman and Szelepcsényi.
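The counting behind this can be sketched as follows (a standard back-of-the-envelope bound, where Q is the state set and Γ the work-tape alphabet):

```latex
\#\text{configurations} \;\le\;
  \underbrace{|Q|}_{\text{state}} \cdot
  \underbrace{(n+2)}_{\text{input head}} \cdot
  \underbrace{s(n)}_{\text{work head}} \cdot
  \underbrace{|\Gamma|^{s(n)}}_{\text{work tape contents}}
  \;=\; n \cdot 2^{O(s(n))}.
```

When s(n)≥log n we have n≤2^(s(n)), so the whole product is 2^(O(s(n))); when s(n)=o(log n) the factor n dominates and the configuration graph is too big for the standard arguments to go through unchanged.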

Still, many researchers have studied sublogarithmic space classes, but these results tend to depend on the exact machine model and it's tricky to understand what they tell us about general computation.

Tuesday, March 29, 2005

A Non-Standard Post

Mike O'Donnell tells a story of a talk he once saw where the speaker considered the following recursive program:
f(n) := output 0 if n = 0; output f(n-1) otherwise.
By straightforward induction one can show that for any natural number n, f(n) will halt in a finite number of steps. The speaker argued that if we take n to be a non-standard natural number (which is bigger than all the standard integers) then the program will never halt. Mike O'Donnell counters that it will halt, just in a non-standard number of steps.
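For concreteness, here is the program transcribed into Python (a direct rendering; for any standard natural n the recursion bottoms out after exactly n recursive calls):

```python
def f(n):
    # Base case: halt immediately on 0.
    if n == 0:
        return 0
    # Otherwise recurse on n-1.  A straightforward induction on n
    # shows this halts, after n recursive calls, for every natural n.
    return f(n - 1)

print(f(10))  # prints 0
```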

Suppose we could prove P≠NP in the theory of arithmetic. I can create a machine M that solves SAT on standard formulas φ in time |φ|^k if k is nonstandard: since k, and thus |φ|^k, is greater than every standard integer, we have time to do exhaustive search. However, there will be some nonstandard φ on which M fails to solve satisfiability in time |φ|^k, for whatever the satisfiability question for nonstandard φ means.

Now suppose P=NP in the standard model but P≠NP in some non-standard model (and thus the P versus NP question is independent of the theory of arithmetic). We have a standard machine M and a standard integer k such that M(φ) correctly computes whether a standard φ is in SAT in |φ|^k steps. But for some non-standard φ, M(φ) would fail even though it gets to run in time n^k for the nonstandard n=|φ|. Even if we allow M and k to be non-standard there will be some φ for which M fails to determine satisfiability.

You can keep playing this game and never get into trouble, assuming the theory of arithmetic is consistent. But I get a headache when I try to think about what a non-standard Turing machine, a non-standard polynomial running time and satisfiability of a non-standard formula really mean.

Monday, March 28, 2005

Getting an Edge

Using performance-enhancing drugs has become a major issue in national and international sports, most recently in Major League Baseball in America where many major players have admitted (or refused to deny) using steroids to increase their power.

What about in academics? Supposedly some mathematicians have used amphetamines to get an edge or keep up with younger mathematicians. If we discover that a mathematician used such a performance-enhancing drug to prove a theorem would we take away his credit for that result? No, we wouldn't.

I don't have any direct evidence that any mathematicians or computer scientists actually use any performance-enhancing drugs, other than caffeine of course. Even so I doubt many of us would agree to random drug testing of academics. But then why should professional athletes be held to a different standard than professional academics?

Saturday, March 26, 2005

My Brother

I have a brother who worked in the music industry and got us first row tickets to Kool and the Gang. I have a brother who was an entertainment lawyer who appeared on Court TV and a morning show in Trinidad and Tobago. I have a brother who co-founded a popular fantasy sports website which was later bought out by CBS Sportsline. I have a brother who was shown in a bar celebrating the Red Sox during the World Series telecast last fall and a week later pictured in the New York Post with the trophy. I have a brother who created an award winning short film in New York and is now producing a movie in LA.

I have but one brother. Happy Fortieth Matt. Thanks for making my life seem so ordinary.

Thursday, March 24, 2005

P=NP and the Arts

A few years ago someone asked Steven Rudich, a complexity theorist at Carnegie-Mellon, why he thought P is different than NP. He replied "I can recognize great music but I can't create great music," the implication being that it's much harder to find a solution than to verify one.

But suppose NP-complete problems do have very efficient algorithms. Can we use them to create art, perhaps, as someone recently suggested to me, create a new Mozart opera, Shakespeare play or finish Schubert's symphony?

Perhaps we could use P=NP to find a small circuit that outputs "Shakespeare" plays. But these plays would only extrapolate from his known works. The program cannot add the new creative ideas Shakespeare put in his works. It cannot create art, just generate similar pieces.

Of course this whole exercise is moot since we strongly believe that NP-complete problems are hard. Even so a P=NP fantasyland might put some mathematicians and computer scientists out of work but true artists will still create in ways computers can never match.

Wednesday, March 23, 2005

Complexity LaTeX Package

From Chris Bourke: I've created a LaTeX package for typesetting complexity commands ($\P$, $\NP$, etc). It's available on CTAN. Not only does it define commands for classes (and languages, functions) but with just an option call to the package, you can specify the font all the classes are typeset in. I welcome feedback.

Tuesday, March 22, 2005

Tape Reduction

Given a k-tape Turing machine, can we reduce the number of tapes without too large an increase in time and space (memory)? This is not just an esoteric question: tape reduction plays an important role in time and space hierarchies and in creating efficient reductions.

For space we have an easy result: Every k-tape s(n)-space bounded Turing machine can be simulated by a 1-tape machine in O(s(n)) space. Create a single supertape with separate "tracks" for each of the original tapes and add markers for the locations of the heads on each of these tapes.
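As a toy illustration of the track idea (a hypothetical encoding, not a full Turing machine simulator): each supertape cell stores, for every original tape, that tape's symbol at this position plus a marker for whether that tape's head sits here, so k tapes of length at most s(n) fit in a single tape of s(n) cells over a larger alphabet.

```python
def pack_tapes(tapes, heads):
    """Combine k tapes (lists of symbols) and their head positions into
    one supertape whose cells are k-tuples of (symbol, head_here) pairs."""
    width = max(len(t) for t in tapes)
    supertape = []
    for pos in range(width):
        cell = tuple(
            (t[pos] if pos < len(t) else '_',  # pad short tapes with blanks
             pos == h)                          # head marker for this track
            for t, h in zip(tapes, heads))
        supertape.append(cell)
    return supertape

# Two tapes packed into one; space used stays O(max tape length).
st = pack_tapes([['a', 'b'], ['0', '1', '1']], [1, 0])
```

The simulating machine scans its single tape, finds the marked head cell for each track, and updates symbol and marker according to the original machine's transition, using the same O(s(n)) cells.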

For deterministic and nondeterministic time, this construction yields a t^2(n)-time 1-tape simulation of a k-tape t(n)-time machine, and this is the best you can do (consider { x#x | x∈Σ* }). We can do much better if we reduce k tapes to 2 tapes.

We can simulate any nondeterministic t(n)-time k-tape machine in nondeterministic O(t(n)) time on a 2-tape machine. Roughly, the simulation guesses every step of the transition function on one tape and uses the other tape to verify the guessed transitions against each tape of the original machine, one tape at a time.

For deterministic time we can only achieve a weaker result: Every t(n)-time k-tape machine has a 2-tape simulation using O(t(n) log t(n)) time. The proof (due to Hennie and Stearns) is quite involved and I won't give it here. The proof has a nice side effect: the construction creates an oblivious machine, where the head movements depend only on the input length. One can use this fact to show that we can simulate any t(n)-time Turing machine with O(t(n) log t(n))-size bounded fan-in circuits and reduce any t(n)-time nondeterministic computation to satisfiability questions of size O(t(n) log t(n)).

Monday, March 21, 2005

Paul Fortnow (1937-1980)

My father passed away 25 years ago today. I thought I would share some of the lessons I learned from him.
  • Once you win an argument, stop arguing.
  • Always play to win. He would always take a handicap and play his hardest rather than dumb down his play particularly in chess, his favorite game.
  • When he went to college, the engineers measured their prowess in the number of digits of precision they could get from their slide rules. I guess he didn't get enough digits as he gave up engineering and eventually went into marketing.
  • The extra character in a movie that seems to serve no purpose is the one that committed the murder.
  • He said "When you grow up and drive you can choose the radio station." Like my daughters let me get away with that.
  • Nixon really was a crook.
  • Given enough money, there is nothing anyone won't do.
  • He meticulously taught me how to keep score in baseball as this is a skill every American should know. He then told me never to keep score as it distracts from the game.
  • The two courses he regretted not taking in college were art and music appreciation. So I took art appreciation in my first year. Big mistake. For the record, the two courses I wish I had taken were economics and mathematical logic.
Dad, if you are surfing the net from the great beyond know that I miss you and my family, including the daughter-in-law and granddaughters you never met, think of you often. And your Red Sox finally won the World Series.

Friday, March 18, 2005

The End of the Travel Season

In 2005 I have already traveled on five trips to four different countries on three different continents. My travel comes in bunches. I didn't teach in the winter quarter (at the cost of doubling up in the spring) so I planned much of my travel during this time. After I return tomorrow I have no major trips planned until STOC in late May.

Why do I travel? I don't enjoy the act of traveling at all, despite the advantage of non-stop flights to nearly everywhere from Chicago. The tourism bit has long lost its allure. After a while all the cities and universities start to look the same.

I don't need to travel. I could hole up in Chicago, just work with the students and visitors and have a moderate research career. I have tenure so my job is safe in any case. So why travel?

  1. People: Working with people, talking with people, drinking with people. Traveling to visit different people keeps my research and academic life from getting stale.
  2. Getting Away: Like most people, I have considerable work and family responsibilities and travel allows me to escape and have time to focus on research. When I visit someone for a short time they will usually make time to work with me as well. The internet has prevented me from completely escaping but I can usually tell people I'm away and I will deal with the problem when I get back. I do try to keep to a goal of not leaving the family for more than a week at a time.
  3. Being Involved: If you want to be an active member of the community people need to know who you are. Don't travel and people will forget you. Email is not a good substitute for meeting face to face.
Traveling has many virtues but I am really looking forward to two straight months of going nowhere at all.

Wednesday, March 16, 2005

Bicycling in Holland

When I visit Amsterdam I rent a bike for much the same reason I rent a car when I travel to New Jersey. CWI is in the east part of the city and does not have great access via public transportation. A bicycle lets me get from the hotel to CWI quickly and also gives me easy access to the rest of the city.

Why does the Netherlands have such a strong biking culture? Most of the country is flat, the cities are compact (at least by Chicago standards) and the weather never gets too hot or too cold to bike. One can reliably commute by bike nearly every day of the year. The country has many dedicated bike paths and marked bike lanes on many major roads. Traffic laws greatly favor bikes over cars. This all gives positive reinforcement to bicycling in this country.

Bicycles here for the most part do not have hand brakes or multiple gears but are otherwise rather sturdy. Bicycle lights are required at night; a visiting complexity theorist got a ticket for not using his. Virtually no one wears a helmet. When we lived here on sabbatical we would put our one-year-old daughter on a bike seat on the handlebars without a helmet—the norm for Holland but it might have gotten us arrested back in the States.

Bicycle theft is a big problem, especially in the cities. The general rule is to spend as much on the lock as you did on the bike. Locking up your bike requires knowing your topology; get it wrong and you might lose a wheel, or worse yet keep your wheel and lose the rest of the bike.

Most people in Holland don't bike for health or environmental reasons. Bicycling is often simply the easiest way to get from point A to point B.

Monday, March 14, 2005

What Makes a Good Collaborator?

This week I visit CWI in Amsterdam, where I spent my sabbatical year eight years ago, and continue working relationships with many people here, especially Harry Buhrman. Harry is my strongest collaborator in the sense that I have written far more papers with him than with any other person. So what makes for a good collaborator?

  • Strength—A good collaborator should of course be a strong researcher in my area of interest and Harry certainly fits that bill. But there are many great complexity theorists I have hardly or never worked with.
  • Compatibility of Strengths—The strengths should complement each other nicely. Good collaborators know their areas well, can quickly focus on the inherently difficult parts of a problem, and have different tools and approaches they can bring to the table.
  • Respect—Good collaborators need to trust and respect each other's ability and judgment.
  • Philosophy—Long-Term collaborators need to share beliefs on what problems are important and worth working on.
  • Personality—You need to have a friendly relationship outside of work. It helps immensely if your respective families get along.
  • Luck—Finding the right problems to work on together at the right time. You need a good first collaboration before you start making time for further collaborations.
  • Distance—This seems counterintuitive but two people in the same geographical area rarely have a long history of collaboration. It's hard to make time for working together when you are in close proximity. Also two people who see each other constantly get tired of working with each other no matter how compatible they are. Better to keep in email contact and have several short and long visits where one can allocate time for the other.
There is something else I can't quite describe, where things just "click" when you find someone you can work with well.

Friday, March 11, 2005

The Reality of Virtual Pets

The Tamagotchi craze has hit my daughters' school. The Tamagotchi is a small toy with a few buttons and a screen where you can play with a cartoonish creature. You can feed and play with your creature to make him/her happy. Two people can point their devices at each other and their Tamagotchis will play and possibly "mate".

First my older daughter's fourth-grade friends gave her a Tamagotchi, and soon after our first grader also wanted one and we relented. She then convinced several of her first-grade friends that they needed them, which caused some parents to call us, mostly because they had bought these Tamagotchis and nobody could figure out how to work them. My first grader is giving out lessons tomorrow.

These devices beep when they need attention—food or playing or needing cleanup after an "accident". Don't attend to them and they will die. My kids pay constant attention to the Tamagotchi and it is sometimes hard to bring them back to reality. My youngest, a couple of days ago, got all excited when her Tamagotchi learned to go to the bathroom by himself. It didn't seem like all that long ago that I got excited about the same thing for her.

Luckily young kids are fickle and the Tamagotchi craze will soon pass. But move over Godzilla, the scariest monster from Japan is a friendly looking creature in a plastic case.

Wednesday, March 09, 2005

NP-complete Problems and Physical Reality

Scott Aaronson has a fun paper in the complexity column of the new SIGACT News where he addresses the question: Can NP-complete problems be solved efficiently in the physical universe? Worth reading though I can sum up the answer in one word: No.

Tuesday, March 08, 2005

Favorite Theorems: Efficient Computation

February Edition

How do we formalize the notion of efficient computation? Two important papers from the 60's suggest polynomial time algorithms are efficient though both caution against equating the concepts.

Paths, Trees and Flowers, Jack Edmonds, 1965.

The Intrinsic Computational Difficulty of Functions, Alan Cobham, 1964.

Edmonds' paper will always be best known for giving the first efficient algorithm for matching on general graphs. But I list it here because of a section of his paper labeled "Digression". Edmonds talks about the difference between exponential and algebraic (polynomial) order, though he cautions against any rigid criteria for efficiency.

An explanation is due on the use of the words "efficient algorithm"…I am not prepared to set up the machinery necessary to give it formal meaning, nor is the present context appropriate for doing this…For practical purposes the difference between algebraic and exponential order is more crucial than the difference between [computable and not computable]…It would be unfortunate for any rigid criterion to inhibit the practical development of algorithms which are either not known or known not to conform nicely to the criterion…However, if only to motivate the search for good, practical algorithms, it is important to realize that it is mathematically sensible even to question their existence.
Cobham defined the class we now call P as important because of its machine independence.
For several reasons the class P seems a natural one to consider. For one thing, if we formalize the definition relative to various general classes of computing machines we seem always to end up with the same well-defined class of functions. Thus we can give a mathematical characterization of P having some confidence it characterizes correctly our informally defined class. This class then turns out to have several natural closure properties, being closed in particular under explicit transformation, composition and limited recursion on notation (digit-by-digit recursion).
Cobham also offers a caution.
The problem is reminiscent of, and obviously closely related to, that of the formalization of the notion of effectiveness. But the emphasis is different in that the physical aspects of the computation process are here of predominant concern.
Perhaps Cobham realized there might be future models of computation that would not correspond to his class P. Later developments in randomized and quantum computation would show that perhaps we cannot have a fixed notion of efficient computation.

Monday, March 07, 2005

The Researcher's Dilemma

I finally read Clayton Christensen's The Innovator's Dilemma. A few years ago an NEC executive gave a talk using the ideas in the book to explain NEC's interest in quantum computing.

Christensen divides new technologies into two groups: sustaining and disruptive. Sustaining technology helps improve a product for its current customers, for example Intel building a faster microprocessor. Disruptive technology is technology that is not initially useful for a big company's current clients. So other, newer and smaller companies develop the technology for a niche market. But eventually the technology improves to meet the needs of the original big company's customers at a much lower cost. At that point the big company cannot catch up with the new technology and loses its main business to the newer startups. Christensen gives many examples, such as the development of smaller disk drives and the advent of discount stores like Target hurting bigger retailers like Sears.

The book makes some solid arguments, but determining which technologies will be disruptive is quite difficult. So companies need to place lots of bets, which is perhaps why NEC funds quantum computing research, just in case it becomes a disruptive technology in the future.

Do the same concepts apply to research? In complexity we have had some disruptive technologies, for example the PCP theorem for hardness of approximation, the idea of Nisan and Wigderson of using hard languages to derandomize, or, going way back, the whole concept of NP-completeness. Other concepts like circuit complexity have not been as disruptive as originally thought. One could also view tools like the probabilistic method or extractors as disruptive.

What can happen to large companies can also happen to researchers. Suppose George is a researcher who creates new results building on current ideas with small twists (sustaining technology). George ignores some new ideas in complexity because they don't seem to help him prove new results in his area. But as those new technologies develop they later allow others to go well beyond what George has done. Now George is behind the curve on the new disruptive technology and can no longer play an important role even in his own field.

What can George do? He can learn the new techniques or he can change fields. And often the newcomers never properly learn the old tools and George can still pull a few surprises out of his hat.

Friday, March 04, 2005

Finding Duplicates

Here is an interesting problem given by Muthukrishnan during his talk in the New Horizons workshop.

Start with an array A of n+1 entries, each an integer between 1 and n. By the pigeonhole principle there must be some i≠j and a w such that A(i)=A(j)=w. The goal is to find w. Depending on A there may be several such w; we want to find any one of them. The catch is that you only get to use O(log n) bits of memory.

First a warm-up puzzle: Find w using only O(n) queries to entries of A (remember you only get O(log n) space). Hint: Use pointer chasing.
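In case you want the spoiler, one standard way to realize the pointer-chasing hint is Floyd's tortoise-and-hare cycle finding, treating i ↦ A(i) as a functional graph (sketched here with a 0-indexed Python list of n+1 entries with values in 1..n, so index 0 has no incoming pointer and the cycle's entrance is a duplicated value):

```python
def find_duplicate(A):
    # A has n+1 entries, each in 1..n.  Follow i -> A[i]: since no value
    # is 0, index 0 lies on the tail of a rho shape whose cycle entrance
    # is a duplicated value.  Floyd's algorithm finds it with O(n)
    # queries to A and only two pointers of O(log n) bits each.
    slow = fast = 0
    while True:
        slow = A[slow]        # one step
        fast = A[A[fast]]     # two steps
        if slow == fast:
            break
    # Phase 2: restart one pointer; they meet at the cycle entrance,
    # which is the duplicated value w.
    slow = 0
    while slow != fast:
        slow = A[slow]
        fast = A[fast]
    return slow
```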

Now suppose A is streamed, that is, we see A(1),A(2),…,A(n+1) in order and then get another pass, etc. How many passes do you need to find a w?

You can find w in n+1 passes just by trying each possible value of w. With a little more work you can use O(log n) passes by doing a binary search.
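The binary search can be sketched as follows: keep a candidate value range [lo, hi], and in each pass count how many stream elements fall in the lower half; by pigeonhole the over-full half must contain a duplicate. Here stream_pass is a hypothetical callable that replays the stream once per call:

```python
def find_duplicate_streaming(stream_pass, n):
    # stream_pass() yields A(1),...,A(n+1) in order (one pass).
    # Invariant: the range [lo, hi] holds more than hi-lo+1 entries,
    # so it contains a duplicate.  Each pass halves the range, giving
    # O(log n) passes with O(log n) bits of state (lo, hi, a counter).
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        count = sum(1 for a in stream_pass() if lo <= a <= mid)
        if count > mid - lo + 1:
            hi = mid          # lower half is over-full
        else:
            lo = mid + 1      # so the upper half must be
    return lo
```

For example, with A = [1, 3, 4, 2, 2] and n = 4 one would call find_duplicate_streaming(lambda: iter(A), 4).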

Muthukrishnan asks whether the number of passes needed is Ω(log n) or O(1) or something in between.

Wednesday, March 02, 2005

Theory in Japan

Japan has produced some excellent theorists, like Seinosuke Toda who won the 1998 Gödel prize for his paper reducing the polynomial-time hierarchy to counting solutions of NP problems. But Japan has not had as large an international impact in theoretical computer science as countries like Israel, India or Hungary.

On this trip I am seeing signs that this may change for the better. The conference is kicking off a new project, New Horizons in Computing, that shows a serious commitment by the Japanese government to computer science theory. The large turnout, over 130 registrants, the majority of them students, shows a keen interest in theory from the academic side as well.

I'd like to see more Japanese go abroad for graduate work and postdoc positions and more Japanese universities making theory an important part of their computer science programs. But given the excitement I see at this meeting I am very hopeful that Japan will also become a true theory powerhouse in the near future.